<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.aclweb.org/adminwiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=ChristyDoran</id>
	<title>Admin Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.aclweb.org/adminwiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=ChristyDoran"/>
	<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=Special:Contributions/ChristyDoran"/>
	<updated>2026-04-26T04:47:14Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.6</generator>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73297</id>
		<title>2019Q3 Reports: NAACL 2019</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73297"/>
		<updated>2019-08-06T21:44:06Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Detailed statistics by area */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-Chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Rozovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done in recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year included one-minute slides with pre-recorded audio showcasing the posters to be presented that day, with the goal of giving posters more visibility. These were shown during the welcome reception, breakfasts and breaks. A/V failures on the first day of the conference made it hard to assess their effectiveness. &lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
** A link to the D&amp;amp;I report will be included when it is available.&lt;br /&gt;
&lt;br /&gt;
* Two-stage Submissions &lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
:: Pro: early response to areas with a larger than predicted number of papers&lt;br /&gt;
:: Con: too much overhead for the PCs, as authors repeatedly contacted the chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it; authors did not&lt;br /&gt;
&lt;br /&gt;
* Student Research Papers&lt;br /&gt;
: Talks and posters from the SRW were integrated into the main conference program. This received positive feedback and made for a better experience for the students.&lt;br /&gt;
&lt;br /&gt;
= Submissions rates and distributions =&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some of them partway through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant that 5 parallel tracks were needed to fit the papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked TACL or SRW accordingly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Long&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Short&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Total&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;|TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed &lt;br /&gt;
| 1067 &lt;br /&gt;
| 666 &lt;br /&gt;
|  1733&lt;br /&gt;
|&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk &lt;br /&gt;
| 140  &lt;br /&gt;
|  72  &lt;br /&gt;
| 212 &lt;br /&gt;
| 4 &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster &lt;br /&gt;
|  141 &lt;br /&gt;
|  70 &lt;br /&gt;
|   211 &lt;br /&gt;
|  5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted &lt;br /&gt;
|  281 (26.3%)  &lt;br /&gt;
|  142 (21.3%) &lt;br /&gt;
|  423 (24.4%)   &lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
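As a sanity check, the acceptance percentages in the table above follow directly from the Reviewed and Total Accepted counts. A minimal Python sketch (counts copied from the table; the function name is ours, purely illustrative):

```python
# Counts taken from the table above.
reviewed = {"long": 1067, "short": 666}
accepted = {"long": 281, "short": 142}

def rate(a, r):
    """Acceptance rate as a percentage, rounded to one decimal place."""
    return round(100.0 * a / r, 1)

long_rate = rate(accepted["long"], reviewed["long"])
short_rate = rate(accepted["short"], reviewed["short"])
overall = rate(accepted["long"] + accepted["short"],
               reviewed["long"] + reviewed["short"])
```

These reproduce the table's 26.3% (long), 21.3% (short) and 24.4% (overall) figures.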
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Long reviewed (% accepted)&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Short reviewed (% accepted)&lt;br /&gt;
|-&lt;br /&gt;
| Bio and clinical NLP&lt;br /&gt;
| 7 (57)&lt;br /&gt;
| 28 (17)&lt;br /&gt;
|-&lt;br /&gt;
| Question Answering&lt;br /&gt;
| 73 (36)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|-&lt;br /&gt;
| Cognitive modeling&lt;br /&gt;
| 24 (29)&lt;br /&gt;
| 14 (14)&lt;br /&gt;
|-&lt;br /&gt;
| Resources and Evaluation&lt;br /&gt;
| 33 (27)&lt;br /&gt;
| 20 (20)&lt;br /&gt;
|-&lt;br /&gt;
| Dialog and Interactive systems&lt;br /&gt;
| 64 (20)&lt;br /&gt;
| 18 (27)&lt;br /&gt;
|- &lt;br /&gt;
| Semantics&lt;br /&gt;
| 80 (13)&lt;br /&gt;
| 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics&lt;br /&gt;
| 38 (21) &lt;br /&gt;
|  11 (36)&lt;br /&gt;
|-&lt;br /&gt;
| Sentiment Analysis&lt;br /&gt;
| 32 (28)&lt;br /&gt;
| 40 (20)&lt;br /&gt;
|-&lt;br /&gt;
| Ethics, Bias and Fairness&lt;br /&gt;
| 16 (25)&lt;br /&gt;
| 12 (50)&lt;br /&gt;
|- &lt;br /&gt;
| Social Media&lt;br /&gt;
| 44 (18)&lt;br /&gt;
| 41 (36)&lt;br /&gt;
|-&lt;br /&gt;
| Generation&lt;br /&gt;
| 46 (14)&lt;br /&gt;
| 19 (23)&lt;br /&gt;
|- &lt;br /&gt;
| Speech&lt;br /&gt;
| 19 (31)&lt;br /&gt;
| 9 (33)&lt;br /&gt;
|-&lt;br /&gt;
| Information Extraction&lt;br /&gt;
| 46 (28)&lt;br /&gt;
| 16 (12)&lt;br /&gt;
|-&lt;br /&gt;
| Style&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| 16 (25)&lt;br /&gt;
|-&lt;br /&gt;
| Information Retrieval&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| 13 (30)&lt;br /&gt;
|-&lt;br /&gt;
| Summarization&lt;br /&gt;
| 22 (27)&lt;br /&gt;
| 28 (28)&lt;br /&gt;
|-&lt;br /&gt;
| Machine Learning for NLP&lt;br /&gt;
| 100 (29)&lt;br /&gt;
| 22 (22)&lt;br /&gt;
|-&lt;br /&gt;
| Syntax&lt;br /&gt;
| 36 (52)&lt;br /&gt;
| 54 (13)&lt;br /&gt;
|-&lt;br /&gt;
| Machine Translation&lt;br /&gt;
| 49 (30)&lt;br /&gt;
| 53 (18)&lt;br /&gt;
|-&lt;br /&gt;
| Text Mining&lt;br /&gt;
| 101 (18)&lt;br /&gt;
| 29 (24)&lt;br /&gt;
|-&lt;br /&gt;
| Multilingual NLP&lt;br /&gt;
| 43 (25)&lt;br /&gt;
| 28 (10)&lt;br /&gt;
|- &lt;br /&gt;
| Theory and Formalisms&lt;br /&gt;
| 12 (58)&lt;br /&gt;
| 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications&lt;br /&gt;
| 60 (30)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Vision &amp;amp; Robotics&lt;br /&gt;
| 41 (12)&lt;br /&gt;
| 22 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology&lt;br /&gt;
| 24 (33)&lt;br /&gt;
| 24 (25)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%), and ran a lunchtime Careers in Industry panel that was very well attended. The panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault; the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered: authors who opted for the non-archival version will not have a paper in the proceedings archive and are free to publish elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
= Reviewing =&lt;br /&gt;
&lt;br /&gt;
== Recruiting ACs and Reviewers ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers. All volunteers were screened by the PCs and assigned AC/reviewer roles, and each area was seeded with a set of volunteer reviewers. Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per area. After the abstract deadline, we added more ACs to teams with larger than predicted submission volumes. Our goal was to ensure greater diversity by including in each area some participants who had not been previously involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time, and 40 of the 94 area chairs were first-time area chairs for NAACL.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair (%)&lt;br /&gt;
! Reviewer (%)&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
[[File:Profession reviewers.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
== Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the largest overall number of submissions.&lt;br /&gt;
&lt;br /&gt;
== Handling desk rejects ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As reviewing begins, reviewers may also identify issues that were not caught by ACs, which they flag to the ACs or directly to the PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations of the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy as specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines in either LaTeX or Word (thanks to Emily and Leon for introducing the concept)&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions: 79 papers in total, or approximately 3% of submissions, were desk-rejected.&lt;br /&gt;
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to Area Chairs.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer; some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018 and ACL 2018 forms: a 6-point overall rating scale, so there was no “easy out” mid-point; distinct sections for summary, strengths and weaknesses, to make it easy to scan and compare the relevant sections; and START’s minimum-length feature enabled, to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The graph below shows the timeline of first review submissions.&lt;br /&gt;
[[File:review_submissions.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
Regarding the increasing challenge of preserving double-blind review, the PCs found that papers whose authors the reviewers could guess were more likely to receive an overall score of 5 or 6 than papers whose authors were not identified by the reviewers.&lt;br /&gt;
&lt;br /&gt;
There was no author response period, due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
&lt;br /&gt;
We did not repeat the Test of Time awards from 2018--should this be something that the NAACL/ACL board runs, and/or be done every few years? [There were ToT awards at ACL 2019 and it looks like this will be happening at ACLs.]&lt;br /&gt;
&lt;br /&gt;
= Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
= Presentations =&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), duration: 15 minutes for talk + 3 minutes for questions + 2 dedicated Industry Track sessions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= Timeline =&lt;br /&gt;
&lt;br /&gt;
* Dec. 10th, 2018: Paper submission deadline (both long and short)&lt;br /&gt;
* Dec. 14-17: Area chairs check papers&lt;br /&gt;
* Dec. 20-Jan. 2, 2019: Paper bidding window&lt;br /&gt;
* Jan. 3-8: Area chairs review assignment&lt;br /&gt;
* Jan. 9: Review period starts&lt;br /&gt;
* Jan. 29: Reviews due (around 3 weeks for reviewing)&lt;br /&gt;
* Jan. 30-Feb. 3: Area chairs chase late reviewers and add emergency reviewers&lt;br /&gt;
* Feb. 4-7: Area chairs discussion period&lt;br /&gt;
* Feb. 8-12: Area chairs determine recommendations and enter meta-reviews&lt;br /&gt;
* Feb. 13-21: Final decisions made&lt;br /&gt;
* Feb. 22: Decisions sent to authors&lt;br /&gt;
* March 11: Presentation format recommendations&lt;br /&gt;
* March 18: ACs send best reviewers list&lt;br /&gt;
* March 20-April 8: Best paper selection period&lt;br /&gt;
&lt;br /&gt;
= Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
;Maintaining anonymity&lt;br /&gt;
: Wording of ACL policies invites reinterpretation (e.g. &amp;quot;are asked not to publicize [the paper] further during the anonymity period – the submitted paper should be as anonymous as possible.&amp;quot;)&lt;br /&gt;
: Open review from overlapping conferences requires Chairs to make ad hoc decisions about whether de-anonymization as part of the review process does or does not violate ACL policies&lt;br /&gt;
: Expectation for transparency at odds with confidential review process (community wants to discuss all aspects of review process in social media)&lt;br /&gt;
&lt;br /&gt;
;Higher volume of papers &amp;amp; participants is straining our infrastructure&lt;br /&gt;
: START tools struggle to support this volume of papers&lt;br /&gt;
: Reviewer overload/burnout&lt;br /&gt;
: Challenges in coordinating logistics with the venue (A/V, coffee, recruiting lunch, video release forms, random people jumping into banquet buses) in the absence of a Local Chair&lt;br /&gt;
&lt;br /&gt;
; Possible solutions&lt;br /&gt;
: Look into sharing reviews for rejected papers with next conferences&lt;br /&gt;
: Revisit using Open Review for *ACL&lt;br /&gt;
: Strict policy on double submissions (like EMNLP)&lt;br /&gt;
&lt;br /&gt;
; Other recommendations&lt;br /&gt;
: Do not print handbooks for all participants; have a smaller number available by request. The post-conference survey indicated that a majority of participants used only the conference app. &lt;br /&gt;
: Have a Local Arrangements Chair for NAACL&lt;br /&gt;
: Revise ACL anonymity and submission policies to remove alternate interpretations and thereby spare PCs time-consuming negotiations with authors&lt;br /&gt;
: Consider moving NAACL to spring so that *ACL timelines are less compressed and NAACL reviewing does not fall over end-of-year holidays&lt;br /&gt;
: More automation of format checks in START &amp;amp; better documentation of the ones that are already there (obscure and buried flags) to ease the desk reject process&lt;br /&gt;
: Extend the START COI tools to allow authors to list reviewers who should not be assigned to their paper&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73296</id>
		<title>2019Q3 Reports: NAACL 2019</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73296"/>
		<updated>2019-08-06T21:40:19Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Detailed statistics by area */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done at recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year included one-minute slides with pre-recorded audio showcasing the posters to be presented that day, with the goal of giving posters more visibility. These were shown during the welcome reception, breakfasts, and breaks. A/V failures on the first day of the conference made it hard to assess their effectiveness. &lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
** (a link to the D&amp;amp;I report will be added here when it is available)&lt;br /&gt;
&lt;br /&gt;
* Two-stage Submissions &lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
:: Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
::  Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
* Student Research Papers&lt;br /&gt;
: Talks and posters from the SRW were integrated into the main conference program. This received positive feedback and made for a better experience for the students.&lt;br /&gt;
&lt;br /&gt;
= Submissions rates and distributions =&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted their full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts for which a full paper was never submitted in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent (some part way through the review process); and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant that 5 parallel tracks were needed to fit the papers into 3 days; as the conference grows, decisions will have to be made about continuing to add tracks, adding more days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked TACL or SRW accordingly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Long&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Short&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Total&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;|TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed &lt;br /&gt;
| 1067 &lt;br /&gt;
| 666 &lt;br /&gt;
|  1733&lt;br /&gt;
|&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk &lt;br /&gt;
| 140  &lt;br /&gt;
|  72  &lt;br /&gt;
| 212 &lt;br /&gt;
| 4 &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster &lt;br /&gt;
|  141 &lt;br /&gt;
|  70 &lt;br /&gt;
|   211 &lt;br /&gt;
|  5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted &lt;br /&gt;
|  281 (26.3%)  &lt;br /&gt;
|  142 (21.3%) &lt;br /&gt;
|  423 (24.4%)   &lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Long (%)&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Short (%)&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| Bio and clinical NLP&lt;br /&gt;
| 7 (57)&lt;br /&gt;
| 28 (17)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Question Answering&lt;br /&gt;
| 73 (36)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Cognitive modeling&lt;br /&gt;
| 24 (29)&lt;br /&gt;
| 14 (14)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Resources and Evaluation&lt;br /&gt;
| 33 (27)&lt;br /&gt;
| 20 (20)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Dialog and Interactive systems&lt;br /&gt;
| 64 (20)&lt;br /&gt;
| 18 (27)&lt;br /&gt;
|- &lt;br /&gt;
| &lt;br /&gt;
| Semantics&lt;br /&gt;
| 80 (13)&lt;br /&gt;
| 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| &lt;br /&gt;
| Discourse and Pragmatics&lt;br /&gt;
| 38 (21) &lt;br /&gt;
|  11 (36)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Sentiment Analysis&lt;br /&gt;
| 32 (28)&lt;br /&gt;
| 40 (20)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Ethics, Bias and Fairness&lt;br /&gt;
| 16 (25)&lt;br /&gt;
| 12 (50)&lt;br /&gt;
|- &lt;br /&gt;
| &lt;br /&gt;
| Social Media&lt;br /&gt;
| 44 (18)&lt;br /&gt;
| 41 (36)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Generation&lt;br /&gt;
| 46 (14)&lt;br /&gt;
| 19 (23)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Speech&lt;br /&gt;
| 19 (31)&lt;br /&gt;
| 9 (33)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Information Extraction&lt;br /&gt;
| 46 (28)&lt;br /&gt;
| 16 (12)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Style&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| 16 (25)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Information Retrieval&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| 13 (30)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Summarization&lt;br /&gt;
| 22 (27)&lt;br /&gt;
| 28 (28)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Machine Learning for NLP&lt;br /&gt;
| 100 (29)&lt;br /&gt;
| 22 (22)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Syntax&lt;br /&gt;
| 36 (52)&lt;br /&gt;
| 54 (13)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Machine Translation&lt;br /&gt;
| 49 (30)&lt;br /&gt;
| 53 (18)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Text Mining&lt;br /&gt;
| 101 (18)&lt;br /&gt;
| 29 (24)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Multilingual NLP&lt;br /&gt;
| 43 (25)&lt;br /&gt;
| 28 (10)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Theory and Formalisms&lt;br /&gt;
| 12 (58)&lt;br /&gt;
| 12 (16)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| NLP Applications&lt;br /&gt;
| 60 (30)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Vision &amp;amp; Robotics&lt;br /&gt;
| 41 (12)&lt;br /&gt;
| 22 (36)&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
| Phonology&lt;br /&gt;
| 24 (33)&lt;br /&gt;
| 24 (25)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%), and ran a lunchtime Careers in Industry panel that was very well attended. The panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault, and the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered, meaning that authors who opted for the non-archival version will not have a paper available in the archive and are free to publish elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
= Reviewing =&lt;br /&gt;
&lt;br /&gt;
== Recruiting ACs and Reviewers ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers. All volunteers were vetted by the PCs and assigned AC or reviewer roles, and each area was seeded with a set of volunteer reviewers. Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per topic area. After the abstract deadline, we added more ACs to teams with larger-than-predicted submissions. Our goal was to ensure greater diversity by including in each area some participants who had not been previously involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time, and 40 of the 94 area chairs were first-time area chairs for NAACL.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair&lt;br /&gt;
! Reviewer&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
[[File:Profession reviewers.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
== Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue and Vision. Text Mining ended up with the overall largest number of submissions.&lt;br /&gt;
&lt;br /&gt;
== Handling desk rejects ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As reviewing begins, reviewers may also identify issues that were not caught by the ACs, which they flag to the ACs or directly to the PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is checked for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations to the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations to the anonymity policy as specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines, in either LaTeX or Word (thanks to Emily and Leon for introducing the concept).&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions, meaning that about 3% of the submissions were desk-rejected.&lt;br /&gt;
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to Area Chairs.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer, though some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid review form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale, so there was no “easy out” mid-point; distinct summary, strengths, and weaknesses sections, to make the relevant parts easy to scan and compare; and START’s minimum-length feature enabled, to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The graph below shows the timeline of first review submissions.&lt;br /&gt;
[[File:review_submissions.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
Regarding the increasing challenge of preserving double-blind review, the PCs found that papers whose authors the reviewers could guess were more likely to receive an overall score of 5 or 6 than papers whose authors the reviewers could not identify.&lt;br /&gt;
&lt;br /&gt;
There was no author response period, due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
&lt;br /&gt;
We did not repeat the Test of Time awards from 2018. Should this be something that the NAACL/ACL board runs, and/or something done every few years? [There were ToT awards at ACL 2019, and it looks like this will be happening at future ACLs.]&lt;br /&gt;
&lt;br /&gt;
= Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
= Presentations =&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), duration: 15 minutes for talk + 3 minutes for questions + 2 dedicated Industry Track sessions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= Timeline =&lt;br /&gt;
&lt;br /&gt;
* Dec. 10th, 2018: Paper submission deadline (both long and short)&lt;br /&gt;
* Dec. 14-17: Area chairs check papers&lt;br /&gt;
* Dec. 20-Jan. 2, 2019: Paper bidding window&lt;br /&gt;
* Jan. 3-8: Area chairs review assignment&lt;br /&gt;
* Jan. 9: Review period starts&lt;br /&gt;
* Jan. 29: Reviews due (around 3 weeks for reviewing)&lt;br /&gt;
* Jan. 30-Feb. 3: Area chairs chase late reviewers and add emergency reviewers&lt;br /&gt;
* Feb. 4-7: Area chairs discussion period&lt;br /&gt;
* Feb. 8-12: Area chairs determine recommendations and enter meta reviews&lt;br /&gt;
* Feb. 13-21: Final decisions made&lt;br /&gt;
* Feb. 22: Decisions sent to authors&lt;br /&gt;
* March 11: Presentation format recommendations&lt;br /&gt;
* March 18: ACs send best reviewers list&lt;br /&gt;
* March 20-April 8: Best paper selection period&lt;br /&gt;
&lt;br /&gt;
= Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
;Maintaining anonymity&lt;br /&gt;
: Wording of ACL policies invites reinterpretation (e.g. &amp;quot;are asked not to publicize [the paper] further during the anonymity period – the submitted paper should be as anonymous as possible.&amp;quot;)&lt;br /&gt;
: Open review from overlapping conferences requires Chairs to make ad hoc decisions about whether de-anonymization as part of the review process does or does not violate ACL policies&lt;br /&gt;
: Expectation for transparency at odds with confidential review process (community wants to discuss all aspects of review process in social media)&lt;br /&gt;
&lt;br /&gt;
;Higher volume of papers &amp;amp; participants is straining our infrastructure&lt;br /&gt;
: START tools struggle to support this volume of papers&lt;br /&gt;
: Reviewer overload/burnout&lt;br /&gt;
: Challenges in coordinating logistics with the venue (A/V, coffee, recruiting lunch, video release forms, random people jumping into banquet buses) in the absence of a Local Chair&lt;br /&gt;
&lt;br /&gt;
; Possible solutions&lt;br /&gt;
: Look into sharing reviews for rejected papers with next conferences&lt;br /&gt;
: Revisit using Open Review for *ACL&lt;br /&gt;
: Strict policy on double submissions (like EMNLP)&lt;br /&gt;
&lt;br /&gt;
; Other recommendations&lt;br /&gt;
: Do not print handbooks for all participants; have a smaller number available by request. The post-conference survey indicated that a majority of participants used only the conference app. &lt;br /&gt;
: Have a Local Arrangements Chair for NAACL&lt;br /&gt;
: Revise ACL anonymity and submission policies to remove alternate interpretations and thereby spare PCs time-consuming negotiations with authors&lt;br /&gt;
: Consider moving NAACL to spring so that *ACL timelines are less compressed and NAACL reviewing does not fall over end-of-year holidays&lt;br /&gt;
: More automation of format checks in START &amp;amp; better documentation of the ones that are already there (obscure and buried flags) to ease the desk reject process&lt;br /&gt;
: Allow extension of START COI tools to allow authors to list reviewers who should not be assigned to their paper&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73295</id>
		<title>2019Q3 Reports: NAACL 2019</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73295"/>
		<updated>2019-08-06T21:35:33Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Detailed statistics by area */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Rozovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done in recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year we introduced one-minute slides with pre-recorded audio showcasing the posters to be presented that day, with the goal of giving posters more visibility. These were shown during the welcome reception, breakfasts, and breaks. A/V failures on the first day of the conference made it hard to assess their effectiveness. &lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
** A link to the D&amp;amp;I report will be included when it becomes available.&lt;br /&gt;
&lt;br /&gt;
*	Two-stage Submissions &lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
:: Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
:: Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles, and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it; authors did not&lt;br /&gt;
&lt;br /&gt;
* Student Research Papers&lt;br /&gt;
: Talks and posters from the SRW were integrated into the main conference program. This received positive feedback and made for a better experience for students.&lt;br /&gt;
&lt;br /&gt;
= Submissions rates and distributions =&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant that 5 parallel tracks were needed to fit the papers into 3 days; as the conference grows, decisions will have to be made about adding more tracks, adding more days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Long&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Short&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Total&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;|TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed &lt;br /&gt;
| 1067 &lt;br /&gt;
| 666 &lt;br /&gt;
|  1733&lt;br /&gt;
|&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk &lt;br /&gt;
| 140  &lt;br /&gt;
|  72  &lt;br /&gt;
| 212 &lt;br /&gt;
| 4 &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster &lt;br /&gt;
|  141 &lt;br /&gt;
|  70 &lt;br /&gt;
|   211 &lt;br /&gt;
|  5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted &lt;br /&gt;
|  281 (26.3%)  &lt;br /&gt;
|  142 (21.3%) &lt;br /&gt;
|  423 (24.4%)   &lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
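As a quick sanity check on the percentages in the table above, the acceptance rates can be recomputed from the raw counts (an illustrative Python sketch; all numbers are taken directly from the table):

```python
# Recompute the acceptance rates reported in the table above.
reviewed = {"long": 1067, "short": 666}
accepted_talk = {"long": 140, "short": 72}
accepted_poster = {"long": 141, "short": 70}

for fmt in ("long", "short"):
    total_accepted = accepted_talk[fmt] + accepted_poster[fmt]
    rate = round(100.0 * total_accepted / reviewed[fmt], 1)
    print(fmt, total_accepted, rate)  # long: 281, 26.3; short: 142, 21.3

overall = round(100.0 * (281 + 142) / (1067 + 666), 1)
print("overall", overall)  # 24.4
```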
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Long (%)&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Short (%)&lt;br /&gt;
|-&lt;br /&gt;
| Bio and clinical NLP&lt;br /&gt;
| 7 (57)&lt;br /&gt;
| 28 (17)&lt;br /&gt;
|-&lt;br /&gt;
| Question Answering&lt;br /&gt;
| 73 (36)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|-&lt;br /&gt;
| Cognitive modeling&lt;br /&gt;
| 24 (29)&lt;br /&gt;
| 14 (14)&lt;br /&gt;
|-&lt;br /&gt;
| Resources and Evaluation&lt;br /&gt;
| 33 (27)&lt;br /&gt;
| 20 (20)&lt;br /&gt;
|-&lt;br /&gt;
| Dialog and Interactive systems&lt;br /&gt;
| 64 (20)&lt;br /&gt;
| 18 (27)&lt;br /&gt;
|-&lt;br /&gt;
| Semantics&lt;br /&gt;
| 80 (13)&lt;br /&gt;
| 42 (11)&lt;br /&gt;
|-&lt;br /&gt;
| Discourse and Pragmatics&lt;br /&gt;
| 38 (21)&lt;br /&gt;
| 11 (36)&lt;br /&gt;
|-&lt;br /&gt;
| Sentiment Analysis&lt;br /&gt;
| 32 (28)&lt;br /&gt;
| 40 (20)&lt;br /&gt;
|-&lt;br /&gt;
| Ethics, Bias and Fairness&lt;br /&gt;
| 16 (25)&lt;br /&gt;
| 12 (50)&lt;br /&gt;
|-&lt;br /&gt;
| Social Media&lt;br /&gt;
| 44 (18)&lt;br /&gt;
| 41 (36)&lt;br /&gt;
|-&lt;br /&gt;
| Generation&lt;br /&gt;
| 46 (14)&lt;br /&gt;
| 19 (23)&lt;br /&gt;
|-&lt;br /&gt;
| Speech&lt;br /&gt;
| 19 (31)&lt;br /&gt;
| 9 (33)&lt;br /&gt;
|-&lt;br /&gt;
| Information Extraction&lt;br /&gt;
| 46 (28)&lt;br /&gt;
| 16 (12)&lt;br /&gt;
|-&lt;br /&gt;
| Style&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| 16 (25)&lt;br /&gt;
|-&lt;br /&gt;
| Information Retrieval&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| 13 (30)&lt;br /&gt;
|-&lt;br /&gt;
| Summarization&lt;br /&gt;
| 22 (27)&lt;br /&gt;
| 28 (28)&lt;br /&gt;
|-&lt;br /&gt;
| Machine Learning for NLP&lt;br /&gt;
| 100 (29)&lt;br /&gt;
| 22 (22)&lt;br /&gt;
|-&lt;br /&gt;
| Syntax&lt;br /&gt;
| 36 (52)&lt;br /&gt;
| 54 (13)&lt;br /&gt;
|-&lt;br /&gt;
| Machine Translation&lt;br /&gt;
| 49 (30)&lt;br /&gt;
| 53 (18)&lt;br /&gt;
|-&lt;br /&gt;
| Text Mining&lt;br /&gt;
| 101 (18)&lt;br /&gt;
| 29 (24)&lt;br /&gt;
|-&lt;br /&gt;
| Multilingual NLP&lt;br /&gt;
| 43 (25)&lt;br /&gt;
| 28 (10)&lt;br /&gt;
|-&lt;br /&gt;
| Theory and Formalisms&lt;br /&gt;
| 12 (58)&lt;br /&gt;
| 12 (16)&lt;br /&gt;
|-&lt;br /&gt;
| NLP Applications&lt;br /&gt;
| 60 (30)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|-&lt;br /&gt;
| Vision &amp;amp; Robotics&lt;br /&gt;
| 41 (12)&lt;br /&gt;
| 22 (36)&lt;br /&gt;
|-&lt;br /&gt;
| Phonology&lt;br /&gt;
| 24 (33)&lt;br /&gt;
| 24 (25)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters, acceptance rate ~28%) and ran a lunchtime Careers in Industry panel that was very well attended. The panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault, and the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference program, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered: authors who opted for the non-archival option do not have a paper in the proceedings archive and are free to publish the work elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
= Reviewing =&lt;br /&gt;
&lt;br /&gt;
== Recruiting ACs and Reviewers ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers. All volunteers were vetted by the PCs and assigned AC or reviewer roles, and each area was seeded with a set of volunteer reviewers. Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per area. After the abstract deadline, we added more ACs to teams with larger-than-predicted submission volumes. Our goal was to ensure greater diversity by including in each area some participants who had not previously been involved and therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time, and 40 of the 94 area chairs were first-time area chairs for NAACL.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair&lt;br /&gt;
! Reviewer&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
[[File:Profession reviewers.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
== Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
As described under Main Innovations, we followed a two-stage submission process in which abstracts were due one week before full papers, so that we could get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the largest overall number of submissions.&lt;br /&gt;
&lt;br /&gt;
== Handling desk rejects ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As reviewing begins, reviewers may also identify issues that were not caught by ACs, which they flag to the ACs or directly to the PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations of the dual-submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines in either LaTeX or Word (thanks to Emily and Leon for introducing the concept)&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions, meaning that roughly 3% of submissions were desk-rejected.&lt;br /&gt;
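Those desk-reject categories tally up as follows (an illustrative sketch; the counts come from the paragraph above):

```python
# Tally the desk-reject categories reported above.
desk_rejects = {"format": 44, "anonymity": 24, "dual submission": 11}
submissions = 2378

total = sum(desk_rejects.values())
share = round(100.0 * total / submissions, 1)
print(total, share)  # 79 desk rejects, roughly 3.3% of submissions
```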
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to Area Chair assignments.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer, though some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale, so there was no “easy out” mid-point; distinct sections for summary, strengths, and weaknesses, to make relevant sections easy to scan and compare; and START’s minimum-length feature enabled, to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The graph below shows the timeline of first review submissions.&lt;br /&gt;
[[File:review_submissions.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
Regarding the increasing challenge of preserving double-blind review, the PCs found that papers whose authors the reviewers could guess were more likely to receive an overall score of 5 or 6 than papers whose authors the reviewers could not identify.&lt;br /&gt;
&lt;br /&gt;
There was no author response period, due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they strongly want to be able to respond to reviews.&lt;br /&gt;
&lt;br /&gt;
We did not repeat the Test of Time awards from 2018. Should this be something that the NAACL/ACL board runs, and/or something done every few years? [There were ToT awards at ACL 2019, and it looks like this will be happening at future ACLs.]&lt;br /&gt;
&lt;br /&gt;
= Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
= Presentations =&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), duration: 15 minutes for talk + 3 minutes for questions + 2 dedicated Industry Track sessions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= Timeline =&lt;br /&gt;
&lt;br /&gt;
* Dec. 10th, 2018: Paper submission deadline (both long and short)&lt;br /&gt;
* Dec. 14-17: Area chairs check papers&lt;br /&gt;
* Dec 20-Jan 2, 2019: Paper bidding window&lt;br /&gt;
* Jan. 3-8: Area chairs review assignment&lt;br /&gt;
* Jan. 9: Review period starts&lt;br /&gt;
* Jan. 29: Reviews due (around 3 weeks for reviewing)&lt;br /&gt;
* Jan. 30-Feb 3: Area chairs chase late reviewers and add emergency reviewers&lt;br /&gt;
* Feb 4th-7: Area chairs discussion period&lt;br /&gt;
* Feb 8th-12: Area chairs determine recommendations and enter meta reviews&lt;br /&gt;
* Feb 13-21: Final decisions made&lt;br /&gt;
* Feb 22: Decisions sent to authors&lt;br /&gt;
* March 11: Presentation format recommendations&lt;br /&gt;
* March 18: ACs send best reviewers list&lt;br /&gt;
* March 20-April 8: Best paper selection period&lt;br /&gt;
&lt;br /&gt;
= Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
;Maintaining anonymity&lt;br /&gt;
: Wording of ACL policies invites reinterpretation (e.g. &amp;quot;are asked not to publicize [the paper] further during the anonymity period – the submitted paper should be as anonymous as possible.&amp;quot;)&lt;br /&gt;
: Open review from overlapping conferences requires Chairs to make ad hoc decisions about whether de-anonymization as part of the review process does or does not violate ACL policies&lt;br /&gt;
: Expectation for transparency at odds with confidential review process (community wants to discuss all aspects of review process in social media)&lt;br /&gt;
&lt;br /&gt;
;Higher volume of papers &amp;amp; participants is straining our infrastructure&lt;br /&gt;
: START tools struggle to support this volume of papers&lt;br /&gt;
: Reviewer overload/burnout&lt;br /&gt;
: Challenges in coordinating logistics with the venue (A/V, coffee, recruiting lunch, video release forms, random people jumping into banquet buses) in the absence of a Local Chair&lt;br /&gt;
&lt;br /&gt;
; Possible solutions&lt;br /&gt;
: Look into sharing reviews for rejected papers with next conferences&lt;br /&gt;
: Revisit using Open Review for *ACL&lt;br /&gt;
: Strict policy on double submissions (like EMNLP)&lt;br /&gt;
&lt;br /&gt;
; Other recommendations&lt;br /&gt;
: Do not print handbooks for all participants; have a smaller number available by request. The post-conference survey indicated that a majority of participants used only the conference app. &lt;br /&gt;
: Have a Local Arrangements Chair for NAACL&lt;br /&gt;
: Revise ACL anonymity and submission policies to remove alternate interpretations and thereby spare PCs time-consuming negotiations with authors&lt;br /&gt;
: Consider moving NAACL to spring so that *ACL timelines are less compressed and NAACL reviewing does not fall over end-of-year holidays&lt;br /&gt;
: More automation of format checks in START &amp;amp; better documentation of the ones that are already there (obscure and buried flags) to ease the desk reject process&lt;br /&gt;
: Extend START COI tools to allow authors to list reviewers who should not be assigned to their paper&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73294</id>
		<title>2019Q3 Reports: NAACL 2019</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73294"/>
		<updated>2019-08-06T21:30:29Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Submissions rates and distributions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Rozovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and identifying where bias correction becomes censorship. The three invited speakers were all chosen to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done in recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year we introduced one-minute slides with pre-recorded audio showcasing the posters to be presented that day. The goal was to give posters more visibility. These were shown during the welcome reception, breakfast and breaks. A/V failures on the first day of the conference made it hard to assess their effectiveness. &lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives, including:&lt;br /&gt;
** additional questions on the registration form to identify any needed accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
** A link to the D&amp;amp;I report will be added when it is available.&lt;br /&gt;
&lt;br /&gt;
* Two-stage Submissions &lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
:: Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
::  Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
* Student Research Papers&lt;br /&gt;
: Talks and posters from the SRW were integrated into the main conference program. This received positive feedback and made for a better experience for the students.&lt;br /&gt;
&lt;br /&gt;
= Submissions rates and distributions =&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent (some part way through the review process); and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit the papers into 3 days. As the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Long&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Short&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Total&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;|TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed &lt;br /&gt;
| 1067 &lt;br /&gt;
| 666 &lt;br /&gt;
|  1733&lt;br /&gt;
|&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk &lt;br /&gt;
| 140  &lt;br /&gt;
|  72  &lt;br /&gt;
| 212 &lt;br /&gt;
| 4 &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster &lt;br /&gt;
|  141 &lt;br /&gt;
|  70 &lt;br /&gt;
|   211 &lt;br /&gt;
|  5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted &lt;br /&gt;
|  281 (26.3%)  &lt;br /&gt;
|  142 (21.3%) &lt;br /&gt;
|  423 (24.4%)   &lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Long (%)&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Short (%)&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Long (%)&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Short (%)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP&lt;br /&gt;
| 7 (57)&lt;br /&gt;
| 28 (17)&lt;br /&gt;
| Question Answering&lt;br /&gt;
| 73 (36)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling&lt;br /&gt;
| 24 (29)&lt;br /&gt;
| 14 (14)&lt;br /&gt;
| Resources and Evaluation&lt;br /&gt;
| 33 (27)&lt;br /&gt;
| 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems&lt;br /&gt;
| 64 (20)&lt;br /&gt;
| 18 (27)&lt;br /&gt;
| Semantics&lt;br /&gt;
| 80 (13)&lt;br /&gt;
| 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics&lt;br /&gt;
| 38 (21)&lt;br /&gt;
| 11 (36)&lt;br /&gt;
| Sentiment Analysis&lt;br /&gt;
| 32 (28)&lt;br /&gt;
| 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness&lt;br /&gt;
| 16 (25)&lt;br /&gt;
| 12 (50)&lt;br /&gt;
| Social Media&lt;br /&gt;
| 44 (18)&lt;br /&gt;
| 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Generation&lt;br /&gt;
| 46 (14)&lt;br /&gt;
| 19 (23)&lt;br /&gt;
| Speech&lt;br /&gt;
| 19 (31)&lt;br /&gt;
| 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction&lt;br /&gt;
| 46 (28)&lt;br /&gt;
| 16 (12)&lt;br /&gt;
| Style&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| 13 (30)&lt;br /&gt;
| Summarization&lt;br /&gt;
| 22 (27)&lt;br /&gt;
| 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP&lt;br /&gt;
| 100 (29)&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| Syntax&lt;br /&gt;
| 36 (52)&lt;br /&gt;
| 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation&lt;br /&gt;
| 49 (30)&lt;br /&gt;
| 53 (18)&lt;br /&gt;
| Text Mining&lt;br /&gt;
| 101 (18)&lt;br /&gt;
| 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP&lt;br /&gt;
| 43 (25)&lt;br /&gt;
| 28 (10)&lt;br /&gt;
| Theory and Formalisms&lt;br /&gt;
| 12 (58)&lt;br /&gt;
| 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications&lt;br /&gt;
| 60 (30)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
| Vision &amp;amp; Robotics&lt;br /&gt;
| 41 (12)&lt;br /&gt;
| 22 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology&lt;br /&gt;
| 24 (33)&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%), and ran a lunchtime Careers in Industry panel that was very well attended. Panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault, and the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered, meaning that authors who opted for the non-archival version will not have a paper available in the archive and are free to publish elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
= Reviewing =&lt;br /&gt;
&lt;br /&gt;
== Recruiting ACs and Reviewers ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit the Area Chairs and Reviewers. All volunteers were vetted by the PCs and assigned AC/reviewer roles, and each area was seeded with a set of volunteer reviewers. Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per area. After the abstract deadline, we added more ACs to teams with larger than predicted submission counts. Our goal was to ensure greater diversity by including in each area some participants who had not been previously involved, and therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of the 1321 reviewers were reviewing for NAACL for the first time, and 40 of the 94 area chairs were first-time area chairs for NAACL.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair (%)&lt;br /&gt;
! Reviewer (%)&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
[[File:Profession reviewers.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
== Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue and Vision. Text Mining ended up with the overall largest number of submissions.&lt;br /&gt;
&lt;br /&gt;
== Handling desk rejects ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs checked their batch of assigned papers and reported any issues to us. As reviewing began, reviewers sometimes identified issues that had not been caught by ACs, which they flagged to the ACs or directly to the PCs. We then reviewed each of these issues and made a final decision, to ensure that papers were handled consistently. This meant each paper was reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects were:&lt;br /&gt;
* Violations of the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines, in either LaTeX or Word (thanks to Emily and Leon for introducing the concept)&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions. In total, 79 papers, approximately 3% of submissions, were desk-rejected.&lt;br /&gt;
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load-rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to Area Chair assignments.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer; some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018 and ACL 2018 forms: a 6-point overall rating scale, so there was no “easy out” mid-point; distinct summary, strengths and weaknesses sections, to make reviews easy to scan and compare; and START’s minimum-length feature enabled, to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The graph below shows the timeline of first review submissions.&lt;br /&gt;
[[File:review_submissions.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
Regarding the increasing challenge in preserving double blind review, PCs found that the papers whose authors the reviewers could guess were more likely to receive an overall score of 5 or 6, compared to papers whose authors were not identified by the reviewers.&lt;br /&gt;
&lt;br /&gt;
There was no author response period, due to time constraints and the finding from NAACL 2018 that responses had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
&lt;br /&gt;
We did not repeat the Test of Time awards from 2018. Should this be something that the NAACL/ACL board runs, and/or something done every few years? [There were ToT awards at ACL 2019, and it looks like this will be happening at future ACLs.]&lt;br /&gt;
&lt;br /&gt;
= Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
= Presentations =&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), duration: 15 minutes for talk + 3 minutes for questions + 2 dedicated Industry Track sessions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= Timeline =&lt;br /&gt;
&lt;br /&gt;
* Dec. 10th, 2018: Paper submission deadline (both long and short)&lt;br /&gt;
* Dec. 14-17: Area chairs check papers&lt;br /&gt;
* Dec 20-Jan 2, 2019: Paper bidding window&lt;br /&gt;
* Jan. 3-8: Area chairs assign reviewers&lt;br /&gt;
* Jan. 9: Review period starts&lt;br /&gt;
* Jan. 29: Reviews due (around 3 weeks for reviewing)&lt;br /&gt;
* Jan. 30-Feb 3: Area chairs chase late reviewers and add emergency reviewers&lt;br /&gt;
* Feb 4th-7: Area chairs discussion period&lt;br /&gt;
* Feb 8th-12: Area chairs determine recommendations and enter meta reviews&lt;br /&gt;
* Feb 13-21: Final decisions made&lt;br /&gt;
* Feb 22: Decisions sent to authors&lt;br /&gt;
* March 11: Presentation format recommendations&lt;br /&gt;
* March 18: ACs send best reviewers list&lt;br /&gt;
* March 20-April 8: Best paper selection period&lt;br /&gt;
&lt;br /&gt;
= Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
;Maintaining anonymity&lt;br /&gt;
: Wording of ACL policies invites reinterpretation (e.g. &amp;quot;are asked not to publicize [the paper] further during the anonymity period – the submitted paper should be as anonymous as possible.&amp;quot;)&lt;br /&gt;
: Open review from overlapping conferences requires Chairs to make ad hoc decisions about whether de-anonymization as part of the review process does or does not violate ACL policies&lt;br /&gt;
: Expectation for transparency at odds with confidential review process (community wants to discuss all aspects of review process in social media)&lt;br /&gt;
&lt;br /&gt;
;Higher volume of papers &amp;amp; participants is straining our infrastructure&lt;br /&gt;
: START tools struggle to support this volume of papers&lt;br /&gt;
: Reviewer overload/burnout&lt;br /&gt;
: Challenges in coordinating logistics with the venue (A/V, coffee, recruiting lunch, video release forms, random people jumping into banquet buses) in the absence of a Local Chair&lt;br /&gt;
&lt;br /&gt;
; Possible solutions&lt;br /&gt;
: Look into sharing reviews for rejected papers with next conferences&lt;br /&gt;
: Revisit using Open Review for *ACL&lt;br /&gt;
: Strict policy on double submissions (like EMNLP)&lt;br /&gt;
&lt;br /&gt;
; Other recommendations&lt;br /&gt;
: Do not print handbooks for all participants; instead, have a smaller number available by request. The post-conference survey indicated that a majority of participants used only the conference app. &lt;br /&gt;
: Have a Local Arrangements Chair for NAACL&lt;br /&gt;
: Revise ACL anonymity and submission policies to remove alternate interpretations and thereby spare PCs time-consuming negotiations with authors&lt;br /&gt;
: Consider moving NAACL to spring so that *ACL timelines are less compressed and NAACL reviewing does not fall over end-of-year holidays&lt;br /&gt;
: More automation of format checks in START &amp;amp; better documentation of the ones that are already there (obscure and buried flags) to ease the desk reject process&lt;br /&gt;
: Extend the START COI tools to allow authors to list reviewers who should not be assigned to their paper&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73293</id>
		<title>2019Q3 Reports: NAACL 2019</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73293"/>
		<updated>2019-08-06T21:22:19Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Submissions rates and distributions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
	Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
	Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
	Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
	Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
	Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done at recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year we showed one-minute slides with pre-recorded audio showcasing the posters to be presented that day, with the goal of giving posters more visibility. These were shown during the welcome reception, breakfast, and breaks. A/V failures on the first day of the conference made it hard to assess their effectiveness.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
** Link to D&amp;amp;I report will be included when it is available.&lt;br /&gt;
&lt;br /&gt;
* Two-stage Submissions&lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
:: Pro: early response to areas with a larger-than-predicted number of papers&lt;br /&gt;
:: Con: too much overhead for the PCs, as authors repeatedly contacted the chairs to request that papers be moved between long and short, or to ask about changes to authorship, titles, and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it; authors did not&lt;br /&gt;
&lt;br /&gt;
* Student Research Papers&lt;br /&gt;
: Talks and posters from the SRW were integrated into the main conference program. This received positive feedback and made for a better experience for the students.&lt;br /&gt;
&lt;br /&gt;
= Submission rates and distributions =&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted their full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some of them part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant that 5 parallel tracks were needed to fit the papers into 3 days; as the conference grows, decisions will have to be made about continuing to add tracks, adding more days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|-&lt;br /&gt;
| Reviewed&lt;br /&gt;
| 1067&lt;br /&gt;
| 666&lt;br /&gt;
| 1733&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| Accepted as talk&lt;br /&gt;
| 140&lt;br /&gt;
| 72&lt;br /&gt;
| 212&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| Accepted as poster&lt;br /&gt;
| 141&lt;br /&gt;
| 70&lt;br /&gt;
| 211&lt;br /&gt;
| 5&lt;br /&gt;
|-&lt;br /&gt;
| Total accepted&lt;br /&gt;
| 281 (26.3%)&lt;br /&gt;
| 142 (21.3%)&lt;br /&gt;
| 423 (24.4%)&lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
|-&lt;br /&gt;
| Bio and clinical NLP&lt;br /&gt;
| 7 (57)&lt;br /&gt;
| 28 (17)&lt;br /&gt;
|-&lt;br /&gt;
| Cognitive modeling&lt;br /&gt;
| 24 (29)&lt;br /&gt;
| 14 (14)&lt;br /&gt;
|-&lt;br /&gt;
| Dialog and Interactive systems&lt;br /&gt;
| 64 (20)&lt;br /&gt;
| 18 (27)&lt;br /&gt;
|-&lt;br /&gt;
| Discourse and Pragmatics&lt;br /&gt;
| 38 (21)&lt;br /&gt;
| 11 (36)&lt;br /&gt;
|-&lt;br /&gt;
| Ethics, Bias and Fairness&lt;br /&gt;
| 16 (25)&lt;br /&gt;
| 12 (50)&lt;br /&gt;
|-&lt;br /&gt;
| Generation&lt;br /&gt;
| 46 (14)&lt;br /&gt;
| 19 (23)&lt;br /&gt;
|-&lt;br /&gt;
| Information Extraction&lt;br /&gt;
| 46 (28)&lt;br /&gt;
| 16 (12)&lt;br /&gt;
|-&lt;br /&gt;
| Information Retrieval&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| 13 (30)&lt;br /&gt;
|-&lt;br /&gt;
| Machine Learning for NLP&lt;br /&gt;
| 100 (29)&lt;br /&gt;
| 22 (22)&lt;br /&gt;
|-&lt;br /&gt;
| Machine Translation&lt;br /&gt;
| 49 (30)&lt;br /&gt;
| 53 (18)&lt;br /&gt;
|-&lt;br /&gt;
| Multilingual NLP&lt;br /&gt;
| 43 (25)&lt;br /&gt;
| 28 (10)&lt;br /&gt;
|-&lt;br /&gt;
| NLP Applications&lt;br /&gt;
| 60 (30)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|-&lt;br /&gt;
| Phonology&lt;br /&gt;
| 24 (33)&lt;br /&gt;
| 24 (25)&lt;br /&gt;
|-&lt;br /&gt;
| Question Answering&lt;br /&gt;
| 73 (36)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|-&lt;br /&gt;
| Resources and Evaluation&lt;br /&gt;
| 33 (27)&lt;br /&gt;
| 20 (20)&lt;br /&gt;
|-&lt;br /&gt;
| Semantics&lt;br /&gt;
| 80 (13)&lt;br /&gt;
| 42 (11)&lt;br /&gt;
|-&lt;br /&gt;
| Sentiment Analysis&lt;br /&gt;
| 32 (28)&lt;br /&gt;
| 40 (20)&lt;br /&gt;
|-&lt;br /&gt;
| Social Media&lt;br /&gt;
| 44 (18)&lt;br /&gt;
| 41 (36)&lt;br /&gt;
|-&lt;br /&gt;
| Speech&lt;br /&gt;
| 19 (31)&lt;br /&gt;
| 9 (33)&lt;br /&gt;
|-&lt;br /&gt;
| Style&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| 16 (25)&lt;br /&gt;
|-&lt;br /&gt;
| Summarization&lt;br /&gt;
| 22 (27)&lt;br /&gt;
| 28 (28)&lt;br /&gt;
|-&lt;br /&gt;
| Syntax&lt;br /&gt;
| 36 (52)&lt;br /&gt;
| 54 (13)&lt;br /&gt;
|-&lt;br /&gt;
| Text Mining&lt;br /&gt;
| 101 (18)&lt;br /&gt;
| 29 (24)&lt;br /&gt;
|-&lt;br /&gt;
| Theory and Formalisms&lt;br /&gt;
| 12 (58)&lt;br /&gt;
| 12 (16)&lt;br /&gt;
|-&lt;br /&gt;
| Vision &amp;amp; Robotics&lt;br /&gt;
| 41 (12)&lt;br /&gt;
| 22 (36)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%), and ran a very well attended lunchtime Careers in Industry panel. Panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault, and the moderator was Phil Resnik.&lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered: authors who opted for the non-archival version will not have a paper in the archive and are free to publish elsewhere.&lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
= Reviewing =&lt;br /&gt;
&lt;br /&gt;
==	Recruiting ACs and Reviewers  ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit area chairs and reviewers. All volunteers were screened by the PCs and assigned AC or reviewer roles, and each area was seeded with a set of volunteer reviewers. Area chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per area. After the abstract deadline, we added more ACs to teams with larger-than-predicted submission volumes. Our goal was to ensure greater diversity by including in each area some participants who had not previously been involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time, and 40 of the 94 area chairs were first-time area chairs for NAACL.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair&lt;br /&gt;
! Reviewer&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
[[File:Profession reviewers.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the largest overall number of submissions.&lt;br /&gt;
&lt;br /&gt;
== 	Handling desk rejects ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As reviewing begins, reviewers may also identify issues that were not caught by the ACs, which they flag to the ACs or directly to the PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations to the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations to the anonymity policy as specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines, in either LaTeX or Word (thanks to Emily and Leon for introducing the concept).&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions: a total of 79 papers, or approximately 3% of submissions, were desk-rejected.&lt;br /&gt;
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to the identity of their Area Chairs.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer; some reviewers agreed to handle more.&lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms, with a 6-point overall rating scale so there was no “easy out” mid-point; distinct sections for summary, strengths, and weaknesses, to make relevant sections easy to scan and compare; and the minimum-length feature of START enabled, to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others circumvented it outright via HTML tags or repeated filler content.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The graph below shows the timeline of first review submissions.&lt;br /&gt;
[[File:review_submissions.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
Regarding the increasing challenge of preserving double-blind review, the PCs found that papers whose authors the reviewers could guess were more likely to receive an overall score of 5 or 6 than papers whose authors the reviewers could not identify.&lt;br /&gt;
&lt;br /&gt;
There was no author response period, due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they strongly want to be able to respond to reviews.&lt;br /&gt;
&lt;br /&gt;
We did not repeat the Test of Time awards from 2018. Should this be something that the NAACL/ACL board runs, and/or something done only every few years? [There were ToT awards at ACL 2019, and it looks like this will continue at future ACLs.]&lt;br /&gt;
&lt;br /&gt;
= Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
= Presentations =&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), plus 2 dedicated Industry Track sessions; duration: 15 minutes for the talk + 3 minutes for questions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= Timeline =&lt;br /&gt;
&lt;br /&gt;
* Dec. 10, 2018: Paper submission deadline (both long and short)&lt;br /&gt;
* Dec. 14-17: Area chairs check papers&lt;br /&gt;
* Dec. 20-Jan. 2, 2019: Paper bidding window&lt;br /&gt;
* Jan. 3-8: Area chairs assign reviews&lt;br /&gt;
* Jan. 9: Review period starts&lt;br /&gt;
* Jan. 29: Reviews due (around 3 weeks for reviewing)&lt;br /&gt;
* Jan. 30-Feb. 3: Area chairs chase late reviewers and add emergency reviewers&lt;br /&gt;
* Feb. 4-7: Area chairs discussion period&lt;br /&gt;
* Feb. 8-12: Area chairs determine recommendations and enter meta reviews&lt;br /&gt;
* Feb. 13-21: Final decisions made&lt;br /&gt;
* Feb. 22: Decisions sent to authors&lt;br /&gt;
* March 11: Presentation format recommendations&lt;br /&gt;
* March 18: ACs send best reviewers list&lt;br /&gt;
* March 20-April 8: Best paper selection period&lt;br /&gt;
&lt;br /&gt;
= Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
;Maintaining anonymity&lt;br /&gt;
: Wording of ACL policies invites reinterpretation (e.g. &amp;quot;are asked not to publicize [the paper] further during the anonymity period – the submitted paper should be as anonymous as possible.&amp;quot;)&lt;br /&gt;
: Open review from overlapping conferences requires Chairs to make ad hoc decisions about whether de-anonymization as part of the review process does or does not violate ACL policies&lt;br /&gt;
: Expectation for transparency at odds with confidential review process (community wants to discuss all aspects of review process in social media)&lt;br /&gt;
&lt;br /&gt;
;Higher volume of papers &amp;amp; participants is straining our infrastructure&lt;br /&gt;
: START tools struggle to support this volume of papers&lt;br /&gt;
: Reviewer overload/burnout&lt;br /&gt;
: Challenges in coordinating logistics with the venue (A/V, coffee, recruiting lunch, video release forms, random people jumping into banquet buses) in the absence of a Local Chair&lt;br /&gt;
&lt;br /&gt;
; Possible solutions&lt;br /&gt;
: Look into sharing reviews for rejected papers with next conferences&lt;br /&gt;
: Revisit using Open Review for *ACL&lt;br /&gt;
: Strict policy on double submissions (like EMNLP)&lt;br /&gt;
&lt;br /&gt;
; Other recommendations&lt;br /&gt;
: Do not print handbooks for all participants, have a smaller number available by request. Post-conference survey indicated that a majority of participants used only the conference app. &lt;br /&gt;
: Have a Local Arrangements Chair for NAACL&lt;br /&gt;
: Revise ACL anonymity and submission policies to remove alternate interpretations and thereby spare PCs time-consuming negotiations with authors&lt;br /&gt;
: Consider moving NAACL to spring so that *ACL timelines are less compressed and NAACL reviewing does not fall over end-of-year holidays&lt;br /&gt;
: More automation of format checks in START &amp;amp; better documentation of the ones that are already there (obscure and buried flags) to ease the desk reject process&lt;br /&gt;
: Allow extension of START COI tools to allow authors to list reviewers who should not be assigned to their paper&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73292</id>
		<title>2019Q3 Reports: NAACL 2019</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73292"/>
		<updated>2019-08-06T21:10:36Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Issues and recommendations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
	Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
	Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
	Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
	Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
	Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= 	Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done in recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year included one-minute slides with pre-recorded audio showcasing the posters to be presented that day, with the goal of giving posters more visibility. These were shown during the welcome reception, breakfasts and breaks. A/V failures on the first day of the conference made it hard to assess their effectiveness. &lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
** Link to D&amp;amp;I report will be included when it is available.&lt;br /&gt;
&lt;br /&gt;
*	Two-stage Submissions &lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
:: Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
::  Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
* Student Research Papers&lt;br /&gt;
: Talks and posters from the SRW were integrated into the main conference program. This received positive feedback and made for a better experience for the students.&lt;br /&gt;
&lt;br /&gt;
= Submissions rates and distributions =&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit the papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed&lt;br /&gt;
| 1067&lt;br /&gt;
| 666&lt;br /&gt;
| 1733&lt;br /&gt;
| &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk&lt;br /&gt;
| 140&lt;br /&gt;
| 72&lt;br /&gt;
| 212&lt;br /&gt;
| 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster&lt;br /&gt;
| 141&lt;br /&gt;
| 70&lt;br /&gt;
| 211&lt;br /&gt;
| 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted&lt;br /&gt;
| 281 (26.3%)&lt;br /&gt;
| 142 (21.3%)&lt;br /&gt;
| 423 (24.4%)&lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Area !! Long (%) !! Short (%)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP || 7 (57) || 28 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Question Answering || 73 (36) || 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling || 24 (29) || 14 (14)&lt;br /&gt;
|- &lt;br /&gt;
| Resources and Evaluation || 33 (27) || 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems || 64 (20) || 18 (27)&lt;br /&gt;
|- &lt;br /&gt;
| Semantics || 80 (13) || 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics || 38 (21) || 11 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Sentiment Analysis || 32 (28) || 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness || 16 (25) || 12 (50)&lt;br /&gt;
|- &lt;br /&gt;
| Social Media || 44 (18) || 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Generation || 46 (14) || 19 (23)&lt;br /&gt;
|- &lt;br /&gt;
| Speech || 19 (31) || 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction || 46 (28) || 16 (12)&lt;br /&gt;
|- &lt;br /&gt;
| Style || 24 (25) || 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval || 22 (22) || 13 (30)&lt;br /&gt;
|- &lt;br /&gt;
| Summarization || 22 (27) || 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP || 100 (29) || 22 (22)&lt;br /&gt;
|- &lt;br /&gt;
| Syntax || 36 (52) || 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation || 49 (30) || 53 (18)&lt;br /&gt;
|- &lt;br /&gt;
| Text Mining || 101 (18) || 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP || 43 (25) || 28 (10)&lt;br /&gt;
|- &lt;br /&gt;
| Theory and Formalisms || 12 (58) || 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications || 60 (30) || 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Vision &amp;amp; Robotics || 41 (12) || 22 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology || 24 (33) || 24 (25)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==       Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters, acceptance rate: ~28%), and ran a lunchtime Careers in Industry panel that was very well attended. Panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault; the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered, meaning that authors who opted for the non-archival version will not have a paper available in the archive and are free to publish elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
=	Reviewing =&lt;br /&gt;
&lt;br /&gt;
==	Recruiting ACs and Reviewers  ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit the Area Chairs and Reviewers. All volunteers were screened by the PCs and assigned AC/reviewer roles, and each area was seeded with a set of volunteer reviewers. Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per topic area. After the abstract deadline, we added more ACs to teams with larger-than-predicted submissions. Our goal was to ensure greater diversity by including in each area some participants who may not have been previously involved, and therefore would not have been invited if the committees were built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time, and 40 of the 94 area chairs were first-time area chairs for NAACL.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair (%)&lt;br /&gt;
! Reviewer (%)&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
[[File:Profession reviewers.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
As noted above, we followed a two-stage submission process, with abstracts due one week before full papers, so that we could get a head start on assigning papers to areas and on recruiting additional area chairs. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue and Vision. Text Mining ended up with the largest number of submissions overall.&lt;br /&gt;
&lt;br /&gt;
== 	Handling desk rejects ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As reviewing begins, reviewers may also identify issues that were not caught by ACs, which they flag to the ACs or directly to the PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations of the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions that do not follow the clearly stated format and style guidelines, in either LaTeX or Word (thanks to Emily and Leon for introducing the concept).&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions, meaning that a total of 79 papers (roughly 3% of submissions) were desk-rejected.&lt;br /&gt;
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load-rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to the Area Chairs.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer, though some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid review form combining elements of the EMNLP 2018, NAACL-HLT 2018 and ACL 2018 forms: a 6-point overall rating scale, so there was no “easy out” mid-point; distinct sections for summary, strengths and weaknesses, to make it easy to scan and compare relevant sections; and START’s minimum-length feature enabled, to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The graph below shows the timeline of first review submissions.&lt;br /&gt;
[[File:review_submissions.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
Regarding the increasing challenge of preserving double-blind review, the PCs found that papers whose authors the reviewers could guess were more likely to receive an overall score of 5 or 6 than papers whose authors the reviewers could not identify.&lt;br /&gt;
&lt;br /&gt;
There was no author response period, due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
&lt;br /&gt;
We did not repeat the Test of Time awards from 2018--should this be something that the NAACL/ACL board runs, and/or be done every few years? [There were ToT awards at ACL 2019, and it looks like this will be happening at ACLs.]&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
=	Presentations=&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), plus 2 dedicated Industry Track sessions; duration: 15 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel); duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= 	Timeline =&lt;br /&gt;
&lt;br /&gt;
* Dec. 10th, 2018: Paper submission deadline (both long and short)&lt;br /&gt;
* Dec. 14-17: Area chairs check papers&lt;br /&gt;
* Dec 20-Jan 2, 2019: Paper bidding window&lt;br /&gt;
* Jan. 3-8: Area chairs assign reviewers&lt;br /&gt;
* Jan. 9: Review period starts&lt;br /&gt;
* Jan. 29: Reviews due (around 3 weeks for reviewing)&lt;br /&gt;
* Jan. 30-Feb 3: Area chairs chase late reviewers and add emergency reviewers&lt;br /&gt;
* Feb 4th-7: Area chairs discussion period&lt;br /&gt;
* Feb 8th-12: Area chairs determine recommendations and enter meta reviews&lt;br /&gt;
* Feb 13-21: Final decisions made&lt;br /&gt;
* Feb 22: Decisions sent to authors&lt;br /&gt;
* March 11: Presentation format recommendations&lt;br /&gt;
* March 18: ACs send best reviewers list&lt;br /&gt;
* March 20-April 8: Best paper selection period&lt;br /&gt;
&lt;br /&gt;
=	Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
;Maintaining anonymity&lt;br /&gt;
: Wording of ACL policies invites reinterpretation (e.g. &amp;quot;are asked not to publicize [the paper] further during the anonymity period – the submitted paper should be as anonymous as possible.&amp;quot;)&lt;br /&gt;
: Open review from overlapping conferences requires Chairs to make ad hoc decisions about whether de-anonymization as part of the review process does or does not violate ACL policies&lt;br /&gt;
: Expectation for transparency at odds with confidential review process (community wants to discuss all aspects of review process in social media)&lt;br /&gt;
&lt;br /&gt;
;Higher volume of papers &amp;amp; participants is straining our infrastructure&lt;br /&gt;
: START tools struggle to support this volume of papers&lt;br /&gt;
: Reviewer overload/burnout&lt;br /&gt;
: Challenges in coordinating logistics with the venue (A/V, coffee, recruiting lunch, video release forms, random people jumping into banquet buses) in the absence of a Local Chair&lt;br /&gt;
&lt;br /&gt;
; Possible solutions&lt;br /&gt;
: Look into sharing reviews for rejected papers with next conferences&lt;br /&gt;
: Revisit using Open Review for *ACL&lt;br /&gt;
: Strict policy on double submissions (like EMNLP)&lt;br /&gt;
&lt;br /&gt;
; Other recommendations&lt;br /&gt;
: Do not print handbooks for all participants, have a smaller number available by request. Post-conference survey indicated that a majority of participants used only the conference app. &lt;br /&gt;
: Have a Local Arrangements Chair for NAACL&lt;br /&gt;
: Revise ACL anonymity and submission policies to remove alternate interpretations and thereby spare PCs time-consuming negotiations with authors&lt;br /&gt;
: Consider moving NAACL to spring so that *ACL timelines are less compressed and NAACL reviewing does not fall over end-of-year holidays&lt;br /&gt;
: More automation of format checks in START &amp;amp; better documentation of the ones that are already there (obscure and buried flags) to ease the desk reject process&lt;br /&gt;
: Allow extension of START COI tools to allow authors to list reviewers who should not be assigned to their paper&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73291</id>
		<title>2019Q3 Reports: NAACL 2019</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73291"/>
		<updated>2019-08-06T21:01:27Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Issues and recommendations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
	Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
	Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
	Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
	Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
	Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= 	Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done in recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year included one-minute slides with pre-recorded audio showcasing the posters to be presented that day. The goal was to give posters more visibility. These were shown during the welcome reception, breakfast, and breaks. A/V failures on the first day of the conference made it hard to assess effectiveness.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
** Link to D&amp;amp;I report will be included when it is available.&lt;br /&gt;
&lt;br /&gt;
*	Two-stage Submissions &lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
:: Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
::  Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
* Student Research Papers&lt;br /&gt;
: Talks and posters from the SRW were integrated into the main conference program. This received positive feedback and made for a better experience for students.&lt;br /&gt;
&lt;br /&gt;
= Submissions rates and distributions =&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected for anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some part-way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit more papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed&lt;br /&gt;
| 1067&lt;br /&gt;
| 666&lt;br /&gt;
| 1733&lt;br /&gt;
| &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk&lt;br /&gt;
| 140&lt;br /&gt;
| 72&lt;br /&gt;
| 212&lt;br /&gt;
| 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster&lt;br /&gt;
| 141&lt;br /&gt;
| 70&lt;br /&gt;
| 211&lt;br /&gt;
| 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted&lt;br /&gt;
| 281 (26.3%)&lt;br /&gt;
| 142 (21.3%)&lt;br /&gt;
| 423 (24.4%)&lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
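As a quick sanity check, the percentages in the acceptance break-down follow directly from the counts. This throwaway sketch is illustrative only, not part of the report's tooling:

```python
# Verify the acceptance break-down figures reported above.
reviewed = {"long": 1067, "short": 666}
accepted = {"long": 281, "short": 142}

total_reviewed = sum(reviewed.values())   # 1733
total_accepted = sum(accepted.values())   # 423

for fmt in ("long", "short"):
    rate = 100 * accepted[fmt] / reviewed[fmt]
    print(f"{fmt}: {accepted[fmt]}/{reviewed[fmt]} = {rate:.1f}%")
print(f"total: {total_accepted}/{total_reviewed} = "
      f"{100 * total_accepted / total_reviewed:.1f}%")
```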
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP || 7 (57) || 28 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling || 24 (29) || 14 (14)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems || 64 (20) || 18 (27)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics || 38 (21) || 11 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness || 16 (25) || 12 (50)&lt;br /&gt;
|- &lt;br /&gt;
| Generation || 46 (14) || 19 (23)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction || 46 (28) || 16 (12)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval || 22 (22) || 13 (30)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP || 100 (29) || 22 (22)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation || 49 (30) || 53 (18)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP || 43 (25) || 28 (10)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications || 60 (30) || 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology || 24 (33) || 24 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Question Answering || 73 (36) || 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Resources and Evaluation || 33 (27) || 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Semantics || 80 (13) || 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Sentiment Analysis || 32 (28) || 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Social Media || 44 (18) || 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Speech || 19 (31) || 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Style || 24 (25) || 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Summarization || 22 (27) || 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Syntax || 36 (52) || 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Text Mining || 101 (18) || 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Theory and Formalisms || 12 (58) || 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| Vision &amp;amp; Robotics || 41 (12) || 22 (36)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==       Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%), and ran a lunchtime Careers in Industry panel that was very well attended. Panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault; the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered, meaning that authors who opted for the non-archival version will not have a paper available in the archive and are free to publish elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
=	Reviewing =&lt;br /&gt;
&lt;br /&gt;
==	Recruiting ACs and Reviewers  ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers. All volunteers were vetted by the PCs and assigned AC/reviewer roles, and each area was seeded with a set of volunteer reviewers. Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per area. After the abstract deadline, we added more ACs to teams with larger-than-predicted submission volumes. Our goal was to ensure greater diversity by including in each area some participants who had not previously been involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time, and 40 of the 94 area chairs were first-time area chairs for NAACL.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair (%)&lt;br /&gt;
! Reviewer (%)&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
[[File:Profession reviewers.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue and Vision. Text Mining ended up with the overall largest number of submissions.&lt;br /&gt;
&lt;br /&gt;
== 	Handling desk rejects ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As reviewing begins, reviewers may also identify issues that were not caught by ACs, which they flag to ACs or directly to PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations to the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations to the anonymity policy as specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines in either LaTeX or Word (thanks to Emily and Leon for introducing the concept).&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions, for a total of 79 desk rejects (approximately 3.3% of submissions).&lt;br /&gt;
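For reference, the desk-reject counts above tally as follows (an illustrative check only):

```python
# Tally the desk-reject categories reported above.
desk_rejects = {"format": 44, "anonymity": 24, "dual submission": 11}
total = sum(desk_rejects.values())        # 79
submissions = 2378                        # as of February 7th
rate = 100 * total / submissions
print(f"{total} desk rejects / {submissions} submissions = {rate:.1f}%")
```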
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load-rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to Area Chairs.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer; some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
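The combination of affinity scores (e.g., TPMS), reviewer bids, and per-reviewer load caps described above can be sketched as a greedy matching. Everything here (the function name, the 0.2 bid weight, the score inputs) is hypothetical, offered to illustrate the shape of the problem, not the actual TPMS/START pipeline:

```python
def assign(papers, reviewers, affinity, bids, per_paper=3, cap=5):
    """Greedy reviewer assignment sketch (hypothetical weights).

    affinity[(p, r)] in [0, 1] is an expertise-match score;
    bids.get((p, r)) in {0..3} is the reviewer's bid (3 = eager).
    """
    # Rank all (paper, reviewer) pairs by combined score, best first.
    scores = sorted(
        ((affinity[p, r] + 0.2 * bids.get((p, r), 0), p, r)
         for p in papers for r in reviewers),
        reverse=True,
    )
    load = {r: 0 for r in reviewers}      # papers handled per reviewer
    assigned = {p: [] for p in papers}    # reviewers per paper
    for _, p, r in scores:
        if len(assigned[p]) < per_paper and load[r] < cap:
            assigned[p].append(r)
            load[r] += 1
    return assigned
```

A real system would add conflict-of-interest filtering and the manual tweaking mentioned above; the greedy pass only captures the fairness/expertise/interest trade-off and the load cap.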
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018 and ACL 2018 forms: a 6-point overall rating scale so there was no “easy out” mid-point; distinct sections for summary, strengths and weaknesses, to make relevant sections easy to scan and compare; and the minimum-length feature of START enabled, to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The graph below shows the timeline of first review submissions.&lt;br /&gt;
[[File:review_submissions.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
Regarding the increasing challenge of preserving double-blind review, the PCs found that papers whose authors the reviewers could guess were more likely to receive an overall score of 5 or 6 than papers whose authors were not identified by the reviewers.&lt;br /&gt;
&lt;br /&gt;
There was no author response period, due to time constraints and a finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
&lt;br /&gt;
We did not repeat the Test of Time awards from 2018--should this be something that the NAACL/ACL board runs, and/or be done every few years? [There were ToT awards at ACL 2019 and it looks like this will be happening at ACLs.]&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
=	Presentations=&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), duration: 15 minutes for talk + 3 minutes for questions + 2 dedicated Industry Track sessions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= 	Timeline =&lt;br /&gt;
&lt;br /&gt;
* Dec. 10th, 2018: Paper submission deadline (both long and short)&lt;br /&gt;
* Dec. 14-17: Area chairs check papers&lt;br /&gt;
* Dec 20-Jan 2, 2019: Paper bidding window&lt;br /&gt;
* Jan. 3-8: Area chairs review assignment&lt;br /&gt;
* Jan. 9: Review period starts&lt;br /&gt;
* Jan. 29: Reviews due (around 3 weeks for reviewing)&lt;br /&gt;
* Jan. 30-Feb 3: Area chairs chase late reviewers and add emergency reviewers&lt;br /&gt;
* Feb 4th-7: Area chairs discussion period&lt;br /&gt;
* Feb 8th-12: Area chairs determine recommendations and enter meta reviews&lt;br /&gt;
* Feb 13-21: Final decisions made&lt;br /&gt;
* Feb 22: Decisions sent to authors&lt;br /&gt;
* March 11: Presentation format recommendations&lt;br /&gt;
* March 18: ACs send best reviewers list&lt;br /&gt;
* March 20-April 8: Best paper selection period&lt;br /&gt;
&lt;br /&gt;
=	Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
;Maintaining anonymity&lt;br /&gt;
: Wording of ACL policies invites reinterpretation (e.g. &amp;quot;are asked not to publicize [the paper] further during the anonymity period – the submitted paper should be as anonymous as possible.&amp;quot;)&lt;br /&gt;
: Open review from overlapping conferences requires Chairs to make ad hoc decisions about whether de-anonymization as part of the review process does or does not violate ACL policies&lt;br /&gt;
: Expectation for transparency at odds with confidential review process (community wants to discuss all aspects of review process in social media)&lt;br /&gt;
&lt;br /&gt;
;Higher volume of papers &amp;amp; participants is straining our infrastructure&lt;br /&gt;
: START tools struggle to support this volume of papers&lt;br /&gt;
: Reviewer overload/burnout&lt;br /&gt;
: Challenges in coordinating logistics with the venue (A/V, coffee, recruiting lunch, video release forms, random people jumping into banquet buses) in the absence of a Local Chair&lt;br /&gt;
&lt;br /&gt;
; Possible solutions&lt;br /&gt;
: Look into sharing reviews for rejected papers with next conferences&lt;br /&gt;
: Revisit using Open Review for *ACL&lt;br /&gt;
: Strict policy on double submissions (like EMNLP)&lt;br /&gt;
&lt;br /&gt;
; Other recommendations&lt;br /&gt;
: Do not print handbooks for all participants; have a smaller number available on request. The post-conference survey indicated that a majority of participants used only the conference app. &lt;br /&gt;
: Have a Local Arrangements Chair for NAACL&lt;br /&gt;
: Revise ACL anonymity and submission policies to remove alternate interpretations and thereby spare PCs time-consuming negotiations with authors&lt;br /&gt;
: Consider moving NAACL to spring so that *ACL timelines are less compressed and NAACL reviewing does not fall over end-of-year holidays&lt;br /&gt;
: More automation of format checks in START &amp;amp; better documentation of the ones that are already there (obscure and buried flags) to ease the desk reject process&lt;br /&gt;
:&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73285</id>
		<title>2019Q3 Reports: NAACL 2019</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73285"/>
		<updated>2019-08-06T20:22:32Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Best paper awards */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
	Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
	Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
	Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
	Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
	Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done in recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year included one-minute slides with pre-recorded audio showcasing the posters to be presented that day, shown during the welcome reception, breakfast and breaks. The goal was to give posters more visibility. A/V failures on the first day of the conference made it hard to assess effectiveness. &lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
** Link to D&amp;amp;I report will be included when it is available.&lt;br /&gt;
&lt;br /&gt;
* Two-stage Submissions&lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
:: Pro: early response to areas with a larger than predicted number of papers&lt;br /&gt;
:: Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or to ask about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers were made available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
* Student Research Papers&lt;br /&gt;
: Talks and posters from the SRW were integrated into the main conference program. Feedback on this was positive; it made for a better experience for students.&lt;br /&gt;
&lt;br /&gt;
= Submissions rates and distributions =&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected for anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent (some part way through the review process); and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit more papers into 3 days; as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed&lt;br /&gt;
| 1067&lt;br /&gt;
| 666&lt;br /&gt;
| 1733&lt;br /&gt;
| &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk&lt;br /&gt;
| 140&lt;br /&gt;
| 72&lt;br /&gt;
| 212&lt;br /&gt;
| 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster&lt;br /&gt;
| 141&lt;br /&gt;
| 70&lt;br /&gt;
| 211&lt;br /&gt;
| 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted&lt;br /&gt;
| 281 (26.3%)&lt;br /&gt;
| 142 (21.3%)&lt;br /&gt;
| 423 (24.4%)&lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
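As a quick arithmetic sanity check of the acceptance figures (all counts and rates are taken from the break-down above; the snippet is purely illustrative):

```python
# Re-derive the acceptance totals and rates reported in the break-down.
reviewed = {"long": 1067, "short": 666}
accepted = {"long": 140 + 141, "short": 72 + 70}  # talks + posters

for fmt in ("long", "short"):
    rate = 100 * accepted[fmt] / reviewed[fmt]
    print(f"{fmt}: {accepted[fmt]}/{reviewed[fmt]} = {rate:.1f}%")

total_reviewed = sum(reviewed.values())   # 1733
total_accepted = sum(accepted.values())   # 423
rate = 100 * total_accepted / total_reviewed
print(f"total: {total_accepted}/{total_reviewed} = {rate:.1f}%")  # 24.4%
```

The printed rates reproduce the 26.3%, 21.3% and 24.4% figures quoted in the break-down.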
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP&lt;br /&gt;
| 7 (57)&lt;br /&gt;
| 28 (17)&lt;br /&gt;
| Question Answering&lt;br /&gt;
| 73 (36)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling&lt;br /&gt;
| 24 (29)&lt;br /&gt;
| 14 (14)&lt;br /&gt;
| Resources and Evaluation&lt;br /&gt;
| 33 (27)&lt;br /&gt;
| 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems&lt;br /&gt;
| 64 (20)&lt;br /&gt;
| 18 (27)&lt;br /&gt;
| Semantics&lt;br /&gt;
| 80 (13)&lt;br /&gt;
| 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics&lt;br /&gt;
| 38 (21)&lt;br /&gt;
| 11 (36)&lt;br /&gt;
| Sentiment Analysis&lt;br /&gt;
| 32 (28)&lt;br /&gt;
| 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness&lt;br /&gt;
| 16 (25)&lt;br /&gt;
| 12 (50)&lt;br /&gt;
| Social Media&lt;br /&gt;
| 44 (18)&lt;br /&gt;
| 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Generation&lt;br /&gt;
| 46 (14)&lt;br /&gt;
| 19 (23)&lt;br /&gt;
| Speech&lt;br /&gt;
| 19 (31)&lt;br /&gt;
| 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction&lt;br /&gt;
| 46 (28)&lt;br /&gt;
| 16 (12)&lt;br /&gt;
| Style&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| 13 (30)&lt;br /&gt;
| Summarization&lt;br /&gt;
| 22 (27)&lt;br /&gt;
| 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP&lt;br /&gt;
| 100 (29)&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| Syntax&lt;br /&gt;
| 36 (52)&lt;br /&gt;
| 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation&lt;br /&gt;
| 49 (30)&lt;br /&gt;
| 53 (18)&lt;br /&gt;
| Text Mining&lt;br /&gt;
| 101 (18)&lt;br /&gt;
| 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP&lt;br /&gt;
| 43 (25)&lt;br /&gt;
| 28 (10)&lt;br /&gt;
| Theory and Formalisms&lt;br /&gt;
| 12 (58)&lt;br /&gt;
| 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications&lt;br /&gt;
| 60 (30)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
| Vision &amp;amp; Robotics&lt;br /&gt;
| 41 (12)&lt;br /&gt;
| 22 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology&lt;br /&gt;
| 24 (33)&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%), and ran a lunchtime Careers in Industry panel that was very well attended. Panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault; the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered: authors who opted for the non-archival version will not have a paper in the archive and are free to publish the work elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
=	Reviewing =&lt;br /&gt;
&lt;br /&gt;
==	Recruiting ACs and Reviewers  ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers. All volunteers were screened by the PCs and assigned AC/reviewer roles, and each area was seeded with a set of volunteer reviewers. Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per area. After the abstract deadline, we added more ACs to teams with larger than predicted submission volumes. Our goal was to ensure greater diversity by including in each area some participants who had not been previously involved, and who therefore would not have been invited had the committees been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time, and 40 of the 94 area chairs were first-time NAACL area chairs.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair (%)&lt;br /&gt;
! Reviewer (%)&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
[[File:Profession reviewers.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the largest overall number of submissions.&lt;br /&gt;
&lt;br /&gt;
== 	Handling desk rejects ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As reviewing begins, reviewers may also identify issues that were not caught by ACs, which they flag to ACs or directly to PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations of the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines, in either LaTeX or Word (thanks to Emily and Leon for introducing the concept)&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions: 79 in total, or roughly 3% of submissions.&lt;br /&gt;
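The tally works out as follows (a minimal check using only the category counts quoted in this section):

```python
# Desk rejections by category, as reported as of February 7th.
desk_rejects = {"format": 44, "anonymity": 24, "dual submission": 11}
submissions = 2378

total = sum(desk_rejects.values())
share = 100 * total / submissions
print(f"{total} desk rejects = {share:.1f}% of {submissions} submissions")
```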
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load-rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to Area Chairs.&lt;br /&gt;
&lt;br /&gt;
Review assignment&lt;br /&gt;
* Criteria: fairness, expertise, interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer, though some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
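The load cap described above (at most 5 papers per reviewer, relaxed only by agreement) can be pictured with a toy greedy assignment. Everything here is a hypothetical sketch: the scoring function merely stands in for the combination of area-chair expertise, TPMS affinity and reviewer bids, and none of the names correspond to the actual START/TPMS pipeline.

```python
from collections import defaultdict

def assign(papers, reviewers, score, per_paper=3, cap=5):
    """Greedily give each paper its best-scoring reviewers under a load cap."""
    load = defaultdict(int)
    assignment = {}
    for paper in papers:
        # Rank reviewers by (stand-in) affinity, skip anyone already at cap.
        ranked = sorted(reviewers, key=lambda r: score(paper, r), reverse=True)
        chosen = [r for r in ranked if load[r] < cap][:per_paper]
        for r in chosen:
            load[r] += 1
        assignment[paper] = chosen
    return assignment

# Tiny example with made-up affinity scores (hypothetical data).
papers = ["P1", "P2", "P3"]
reviewers = ["alice", "bob", "carol", "dan"]
affinity = {("P1", "alice"): 0.9, ("P2", "bob"): 0.8}
result = assign(papers, reviewers, lambda p, r: affinity.get((p, r), 0.1))
```

A real deployment would also enforce conflicts of interest and rebalance loads globally rather than greedily, which is part of what the manual tweaking step compensated for.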
&lt;br /&gt;
We used a hybrid review form combining elements of the EMNLP 2018, NAACL-HLT 2018 and ACL 2018 forms: a 6-point overall rating scale (so there was no “easy out” mid-point), distinct summary, strengths and weaknesses sections to make reviews easy to scan and compare, and START’s minimum-length feature enabled to elicit more consistently substantive content for authors. Authors gave this excellent feedback, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The graph below shows the timeline of first review submissions.&lt;br /&gt;
[[File:review_submissions.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
Regarding the increasing challenge of preserving double-blind review: the PCs found that papers whose authors the reviewers could guess were more likely to receive an overall score of 5 or 6 than papers whose authors the reviewers could not identify.&lt;br /&gt;
&lt;br /&gt;
There was no author response period, due to time constraints and the finding from NAACL 2018 that responses had little impact on outcomes. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
&lt;br /&gt;
We did not repeat the Test of Time awards from 2018. Should this be something that the NAACL/ACL board runs, and/or something done only every few years? [There were ToT awards at ACL 2019, and it looks like this will continue at future ACLs.]&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
= Presentations =&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 in parallel), each talk 15 minutes plus 3 minutes for questions, plus 2 dedicated Industry Track sessions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 in parallel), each talk 12 minutes plus 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= Timeline =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
= Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
Maintaining anonymity is becoming increasingly difficult:&lt;br /&gt;
* Open reviewing at overlapping conferences&lt;br /&gt;
* The expectation of transparency is at odds with a confidential review process&lt;br /&gt;
&lt;br /&gt;
The higher volume of papers is straining our infrastructure:&lt;br /&gt;
* START tools struggle to support this volume of papers&lt;br /&gt;
* Reviewer overload/burnout&lt;br /&gt;
&lt;br /&gt;
Recommendations:&lt;br /&gt;
* Look into sharing reviews for rejected papers with subsequent conferences&lt;br /&gt;
* Revisit using OpenReview for *ACL&lt;br /&gt;
* Collect additional lessons learned&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73284</id>
		<title>2019Q3 Reports: NAACL 2019</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73284"/>
		<updated>2019-08-06T20:19:59Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Review process */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done in recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year included one-minute slides with pre-recorded audio showcasing the posters to be presented that day, shown during the welcome reception, breakfast and breaks. The goal was to give posters more visibility. A/V failures on the first day of the conference made it hard to assess effectiveness. &lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
** Link to D&amp;amp;I report will be included when it is available.&lt;br /&gt;
&lt;br /&gt;
*	Two-stage Submissions &lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
:: Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
::  Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
* Student Research Papers&lt;br /&gt;
: Talks and posters from the SRW were integrated into the main conference program. This received positive feedback and made for a better experience for students.&lt;br /&gt;
&lt;br /&gt;
= Submissions rates and distributions =&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some of them partway through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit more papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed || 1067 || 666 || 1733 || &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk || 140 || 72 || 212 || 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster || 141 || 70 || 211 || 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted || 281 (26.3%) || 142 (21.3%) || 423 (24.4%) || 9&lt;br /&gt;
|}&lt;br /&gt;
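As a sanity check, the acceptance percentages in the table above can be re-derived from the raw counts (a minimal sketch; the counts come directly from the table, and rounding to one decimal place is an assumption about how the percentages were computed):

```python
# Re-derive the acceptance rates in the table above from its raw counts.
reviewed = {"long": 1067, "short": 666}
accepted = {"long": 140 + 141, "short": 72 + 70}  # talks + posters

for fmt in reviewed:
    rate = 100 * accepted[fmt] / reviewed[fmt]
    print(f"{fmt}: {accepted[fmt]}/{reviewed[fmt]} accepted ({rate:.1f}%)")

# Overall rate across both formats.
total_rate = 100 * sum(accepted.values()) / sum(reviewed.values())
print(f"total: {sum(accepted.values())}/{sum(reviewed.values())} accepted ({total_rate:.1f}%)")
```

This reproduces the 26.3% (long), 21.3% (short), and 24.4% (overall) figures reported above.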
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Area !! Long (%) !! Short (%) !! Area !! Long (%) !! Short (%)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP || 7 (57) || 28 (17) || Question Answering || 73 (36) || 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling || 24 (29) || 14 (14) || Resources and Evaluation || 33 (27) || 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems || 64 (20) || 18 (27) || Semantics || 80 (13) || 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics || 38 (21) || 11 (36) || Sentiment Analysis || 32 (28) || 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness || 16 (25) || 12 (50) || Social Media || 44 (18) || 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Generation || 46 (14) || 19 (23) || Speech || 19 (31) || 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction || 46 (28) || 16 (12) || Style || 24 (25) || 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval || 22 (22) || 13 (30) || Summarization || 22 (27) || 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP || 100 (29) || 22 (22) || Syntax || 36 (52) || 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation || 49 (30) || 53 (18) || Text Mining || 101 (18) || 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP || 43 (25) || 28 (10) || Theory and Formalisms || 12 (58) || 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications || 60 (30) || 41 (17) || Vision &amp;amp; Robotics || 41 (12) || 22 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology || 24 (33) || 24 (25) || || || &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%) and ran a lunchtime Careers in Industry panel that was very well attended. Panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault; the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered; authors who opted for the non-archival option do not have a paper in the archive and are free to publish the work elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
=	Reviewing =&lt;br /&gt;
&lt;br /&gt;
==	Recruiting ACs and Reviewers  ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers. All volunteers were vetted by the PCs and assigned AC or reviewer roles, and each area was seeded with a set of volunteer reviewers. Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per area. After the abstract deadline, we added more ACs to teams with larger-than-predicted submissions. Our goal was to ensure greater diversity by including in each area some participants who had not previously been involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time, and 40 of the 94 area chairs were first-time area chairs for NAACL.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair&lt;br /&gt;
! Reviewer&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
[[File:Profession reviewers.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the largest overall number of submissions.&lt;br /&gt;
&lt;br /&gt;
== 	Handling desk rejects ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As reviewing begins, reviewers may also identify issues that were not caught by ACs, which they flag to the ACs or directly to the PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects were:&lt;br /&gt;
* Violations of the dual-submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines in either LaTeX or Word (thanks to Emily and Leon for introducing the concept)&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions. This means that about 3.3% of the submissions were desk-rejected.&lt;br /&gt;
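Recomputing the desk-reject rate from the counts above (a quick sketch; the category labels are paraphrased from the list in this section):

```python
# Desk rejects as of February 7th, by category (counts from the text above).
desk_rejects = {"formatting": 44, "anonymity": 24, "dual submission": 11}
total_submissions = 2378

total_rejected = sum(desk_rejects.values())
rate = 100 * total_rejected / total_submissions
print(f"{total_rejected} of {total_submissions} submissions desk-rejected ({rate:.1f}%)")
```

That is 79 papers, or roughly 3.3% of the 2378 submissions.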
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load rebalancing and conflict resolution using keywords and manual inspection of the papers. Area Chairs were blind to author identities.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer, though some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale so there was no “easy out” mid-point; distinct sections for summary, strengths, and weaknesses, to make the relevant sections easy to scan and compare; and START’s minimum-length feature enabled to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The graph below shows the timeline of first review submissions.&lt;br /&gt;
[[File:review_submissions.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
Regarding the increasing challenge of preserving double-blind review, the PCs found that papers whose authors reviewers could guess were more likely to receive an overall score of 5 or 6 than papers whose authors reviewers could not identify.&lt;br /&gt;
&lt;br /&gt;
There was no author response period, due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
&lt;br /&gt;
We did not repeat the Test of Time awards from 2018--should this be something that the NAACL/ACL board runs, and/or something done every few years? [There were ToT awards at ACL 2019, and it looks like this will continue at future ACLs.]&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
=	Presentations=&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), duration: 15 minutes for talk + 3 minutes for questions + 2 dedicated Industry Track sessions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= 	Timeline =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
=	Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
* Maintaining anonymity is becoming increasingly difficult&lt;br /&gt;
* Open review from overlapping conferences&lt;br /&gt;
* Expectation of transparency is at odds with the confidential review process&lt;br /&gt;
* The higher volume of papers is straining our infrastructure&lt;br /&gt;
* START tools struggle to support this volume of papers&lt;br /&gt;
* Reviewer overload/burnout&lt;br /&gt;
* Look into sharing reviews for rejected papers with subsequent conferences&lt;br /&gt;
* Revisit using OpenReview for *ACL&lt;br /&gt;
* Additional lessons learned&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73283</id>
		<title>2019Q3 Reports: NAACL 2019</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_NAACL_2019&amp;diff=73283"/>
		<updated>2019-08-06T20:13:50Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
	Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
	Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
	Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
	Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
	Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= 	Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and determining where bias correction becomes censorship. The three invited speakers were all chosen to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done in recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year's conference included one-minute slides with pre-recorded audio showcasing the posters to be presented that day. The goal was to give posters more visibility. These were shown during the welcome reception, breakfast, and breaks. A/V failures on the first day of the conference made it hard to assess their effectiveness. &lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
** Link to D&amp;amp;I report will be included when it is available.&lt;br /&gt;
&lt;br /&gt;
*	Two-stage Submissions &lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
:: Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
:: Con: too much overhead for the PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or to ask about changes to authorship, titles, and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
* Student Research Papers&lt;br /&gt;
: Talks and posters from the SRW were integrated into the main conference program. This received positive feedback and made for a better experience for students.&lt;br /&gt;
&lt;br /&gt;
= Submissions rates and distributions =&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some of them partway through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit more papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed || 1067 || 666 || 1733 || &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk || 140 || 72 || 212 || 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster || 141 || 70 || 211 || 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted || 281 (26.3%) || 142 (21.3%) || 423 (24.4%) || 9&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Area !! Long (%) !! Short (%) !! Area !! Long (%) !! Short (%)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP || 7 (57) || 28 (17) || Question Answering || 73 (36) || 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling || 24 (29) || 14 (14) || Resources and Evaluation || 33 (27) || 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems || 64 (20) || 18 (27) || Semantics || 80 (13) || 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics || 38 (21) || 11 (36) || Sentiment Analysis || 32 (28) || 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness || 16 (25) || 12 (50) || Social Media || 44 (18) || 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Generation || 46 (14) || 19 (23) || Speech || 19 (31) || 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction || 46 (28) || 16 (12) || Style || 24 (25) || 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval || 22 (22) || 13 (30) || Summarization || 22 (27) || 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP || 100 (29) || 22 (22) || Syntax || 36 (52) || 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation || 49 (30) || 53 (18) || Text Mining || 101 (18) || 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP || 43 (25) || 28 (10) || Theory and Formalisms || 12 (58) || 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications || 60 (30) || 41 (17) || Vision &amp;amp; Robotics || 41 (12) || 22 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology || 24 (33) || 24 (25) || || || &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%) and ran a very well attended lunchtime Careers in Industry panel. The panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault, and the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered, meaning that authors who opted for the non-archival version will not have a paper available in the archive and are free to publish elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
=	Reviewing =&lt;br /&gt;
&lt;br /&gt;
==	Recruiting ACs and Reviewers  ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers. All volunteers were vetted by the PCs and assigned AC/reviewer roles, and each area was seeded with a set of volunteer reviewers; Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per area. After the abstract deadline, we added more ACs to teams with larger than predicted submission counts. Our goal was to ensure greater diversity by including in each area some participants who had not previously been involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time, and 40 of the 94 area chairs were first-time area chairs for NAACL.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair (%)&lt;br /&gt;
! Reviewer (%)&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
[[File:Profession reviewers.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue and Vision. Text Mining ended up with the overall largest number of submissions.&lt;br /&gt;
&lt;br /&gt;
== 	Handling desk rejects ==&lt;br /&gt;
Our process for identifying desk rejects has been very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As the reviewing begins, reviewers may also identify issues that were not caught by ACs, which they flag up to ACs or directly to PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations of the dual-submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines, in either LaTeX or Word (thanks to Emily and Leon for introducing the concept)&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions. In total, 79 papers (about 3% of submissions) were desk-rejected.&lt;br /&gt;
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to the Area Chairs handling their papers.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer, though some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid review form combining elements of the EMNLP 2018, NAACL-HLT 2018 and ACL 2018 forms: a 6-point overall rating scale so there was no “easy out” mid-point; distinct summary, strengths and weaknesses sections to make reviews easy to scan and compare; and START’s minimum-length feature enabled to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The graph below shows the timeline of first review submissions.&lt;br /&gt;
[[File:review_submissions.png|1000px|]]&lt;br /&gt;
&lt;br /&gt;
Regarding the increasing challenge of preserving double-blind review, the PCs found that papers whose authors the reviewers could guess were more likely to receive an overall score of 5 or 6 than papers whose authors the reviewers could not identify.&lt;br /&gt;
&lt;br /&gt;
There was no author response period, due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
&lt;br /&gt;
We did not repeat the Test of Time awards from 2018. Should this be something that the NAACL/ACL board runs, and/or something done every few years?&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
=	Presentations=&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), duration: 15 minutes for talk + 3 minutes for questions + 2 dedicated Industry Track sessions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= 	Timeline =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
=	Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
* Maintaining anonymity is becoming increasingly difficult&lt;br /&gt;
* Open review from overlapping conferences&lt;br /&gt;
* The expectation of transparency is at odds with a confidential review process&lt;br /&gt;
* The higher volume of papers is straining our infrastructure&lt;br /&gt;
* START tools struggle to support this volume of papers&lt;br /&gt;
* Reviewer overload/burnout&lt;br /&gt;
* Look into sharing reviews for rejected papers with subsequent conferences&lt;br /&gt;
* Revisit using OpenReview for *ACL&lt;br /&gt;
* Additional lessons learned&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73131</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73131"/>
		<updated>2019-07-19T19:52:22Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Issues and recommendations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= 	Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done at recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year included one-minute slides with pre-recorded audio showcasing the posters to be presented that day. The goal was to give posters more visibility. These were shown during the welcome reception, breakfast and breaks.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
** INSERT LINK TO THEIR REPORT&lt;br /&gt;
&lt;br /&gt;
*	Two-stage Submissions &lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
:: Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
::  Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
= Submissions rates and distributions =&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected for anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some of them part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit the papers into 3 days; as the conference grows, decisions will have to be made about continuing to add more tracks, adding more days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program, and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed&lt;br /&gt;
| 1067&lt;br /&gt;
| 666&lt;br /&gt;
| 1733&lt;br /&gt;
| &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk&lt;br /&gt;
| 140&lt;br /&gt;
| 72&lt;br /&gt;
| 212&lt;br /&gt;
| 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster&lt;br /&gt;
| 141&lt;br /&gt;
| 70&lt;br /&gt;
| 211&lt;br /&gt;
| 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted&lt;br /&gt;
| 281 (26.3%)&lt;br /&gt;
| 142 (21.3%)&lt;br /&gt;
| 423 (24.4%)&lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP&lt;br /&gt;
| 7 (57)&lt;br /&gt;
| 28 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Question Answering&lt;br /&gt;
| 73 (36)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling&lt;br /&gt;
| 24 (29)&lt;br /&gt;
| 14 (14)&lt;br /&gt;
|- &lt;br /&gt;
| Resources and Evaluation&lt;br /&gt;
| 33 (27)&lt;br /&gt;
| 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems&lt;br /&gt;
| 64 (20)&lt;br /&gt;
| 18 (27)&lt;br /&gt;
|- &lt;br /&gt;
| Semantics&lt;br /&gt;
| 80 (13)&lt;br /&gt;
| 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics&lt;br /&gt;
| 38 (21)&lt;br /&gt;
| 11 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Sentiment Analysis&lt;br /&gt;
| 32 (28)&lt;br /&gt;
| 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness&lt;br /&gt;
| 16 (25)&lt;br /&gt;
| 12 (50)&lt;br /&gt;
|- &lt;br /&gt;
| Social Media&lt;br /&gt;
| 44 (18)&lt;br /&gt;
| 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Generation&lt;br /&gt;
| 46 (14)&lt;br /&gt;
| 19 (23)&lt;br /&gt;
|- &lt;br /&gt;
| Speech&lt;br /&gt;
| 19 (31)&lt;br /&gt;
| 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction&lt;br /&gt;
| 46 (28)&lt;br /&gt;
| 16 (12)&lt;br /&gt;
|- &lt;br /&gt;
| Style&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| 13 (30)&lt;br /&gt;
|- &lt;br /&gt;
| Summarization&lt;br /&gt;
| 22 (27)&lt;br /&gt;
| 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP&lt;br /&gt;
| 100 (29)&lt;br /&gt;
| 22 (22)&lt;br /&gt;
|- &lt;br /&gt;
| Syntax&lt;br /&gt;
| 36 (52)&lt;br /&gt;
| 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation&lt;br /&gt;
| 49 (30)&lt;br /&gt;
| 53 (18)&lt;br /&gt;
|- &lt;br /&gt;
| Text Mining&lt;br /&gt;
| 101 (18)&lt;br /&gt;
| 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP&lt;br /&gt;
| 43 (25)&lt;br /&gt;
| 28 (10)&lt;br /&gt;
|- &lt;br /&gt;
| Theory and Formalisms&lt;br /&gt;
| 12 (58)&lt;br /&gt;
| 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications&lt;br /&gt;
| 60 (30)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Vision &amp;amp; Robotics&lt;br /&gt;
| 41 (12)&lt;br /&gt;
| 22 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology&lt;br /&gt;
| 24 (33)&lt;br /&gt;
| 24 (25)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%) and ran a very well attended lunchtime Careers in Industry panel. The panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault, and the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered, meaning that authors who opted for the non-archival version will not have a paper available in the archive and are free to publish elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
=	Reviewing =&lt;br /&gt;
&lt;br /&gt;
==	Recruiting ACs and Reviewers  ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers. All volunteers were vetted by the PCs and assigned AC/reviewer roles, and each area was seeded with a set of volunteer reviewers; Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per area. After the abstract deadline, we added more ACs to teams with larger than predicted submission counts. Our goal was to ensure greater diversity by including in each area some participants who had not previously been involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair (%)&lt;br /&gt;
! Reviewer (%)&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
THERE IS ANOTHER GRAPHIC TO INSERT HERE&lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue and Vision. Text Mining ended up with the overall largest number of submissions.  &lt;br /&gt;
&lt;br /&gt;
== 	Handling desk rejects ==&lt;br /&gt;
Our process for identifying desk rejects has been very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As the reviewing begins, reviewers may also identify issues that were not caught by ACs, which they flag up to ACs or directly to PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations of the dual-submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines, in either LaTeX or Word (thanks to Emily and Leon for introducing the concept)&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions. In total, 79 papers (about 3% of submissions) were desk-rejected.&lt;br /&gt;
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to the Area Chairs handling their papers.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer; some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018 and ACL 2018 forms, with a 6-point overall rating scale so there was no “easy out” mid-point; distinct sections for summary, strengths and weaknesses, to make it easy to scan and compare the relevant sections; and the minimum-length feature of START enabled, to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To be added:&#039;&#039;&#039; X reviews were received by the end of the review period, Y others within the next week; importance of double-blind reviewing.&lt;br /&gt;
No author response: dropped due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
Video poster highlights (instead of 1-minute madness): A/V failures have made it hard to assess effectiveness. &lt;br /&gt;
SRW papers integrated into sessions: positive feedback from participants, and a better experience for students&lt;br /&gt;
Did not repeat the Test of Time awards from 2018; should these happen every N years to allow for a sliding window?&lt;br /&gt;
&lt;br /&gt;
MORE GRAPHICS TO BE ADDED HERE&lt;br /&gt;
&lt;br /&gt;
= Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
= Presentations =&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), duration: 15 minutes for talk + 3 minutes for questions + 2 dedicated Industry Track sessions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= Timeline =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
= Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
* Maintaining anonymity is becoming increasingly difficult&lt;br /&gt;
* Open review from overlapping conferences&lt;br /&gt;
* Expectation for transparency is at odds with the confidential review process&lt;br /&gt;
* The higher volume of papers is straining our infrastructure&lt;br /&gt;
* START tools struggle to support this volume of papers&lt;br /&gt;
* Reviewer overload/burnout&lt;br /&gt;
* Look into sharing reviews for rejected papers with subsequent conferences&lt;br /&gt;
* Revisit using Open Review for *ACL&lt;br /&gt;
* Additional lessons learned&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73130</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73130"/>
		<updated>2019-07-19T19:51:53Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Issues and recommendations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and determining where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was chosen.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done at recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year included one-minute slides with pre-recorded audio showcasing the posters to be presented that day. The goal was to provide more visibility for posters. These were shown during the welcome reception, breakfast and breaks.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
** INSERT LINK TO THEIR REPORT&lt;br /&gt;
&lt;br /&gt;
* Two-stage Submissions &lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
:: Pro: early response to areas with a larger-than-predicted number of papers&lt;br /&gt;
:: Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
= Submissions rates and distributions =&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that never became full papers in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit the papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding more days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed&lt;br /&gt;
| 1067&lt;br /&gt;
| 666&lt;br /&gt;
| 1733&lt;br /&gt;
| &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk&lt;br /&gt;
| 140&lt;br /&gt;
| 72&lt;br /&gt;
| 212&lt;br /&gt;
| 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster&lt;br /&gt;
| 141&lt;br /&gt;
| 70&lt;br /&gt;
| 211&lt;br /&gt;
| 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted&lt;br /&gt;
| 281 (26.3%)&lt;br /&gt;
| 142 (21.3%)&lt;br /&gt;
| 423 (24.4%)&lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP&lt;br /&gt;
| 7 (57)&lt;br /&gt;
| 28 (17)&lt;br /&gt;
| Question Answering&lt;br /&gt;
| 73 (36)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling&lt;br /&gt;
| 24 (29)&lt;br /&gt;
| 14 (14)&lt;br /&gt;
| Resources and Evaluation&lt;br /&gt;
| 33 (27)&lt;br /&gt;
| 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems&lt;br /&gt;
| 64 (20)&lt;br /&gt;
| 18 (27)&lt;br /&gt;
| Semantics&lt;br /&gt;
| 80 (13)&lt;br /&gt;
| 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics&lt;br /&gt;
| 38 (21)&lt;br /&gt;
| 11 (36)&lt;br /&gt;
| Sentiment Analysis&lt;br /&gt;
| 32 (28)&lt;br /&gt;
| 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness&lt;br /&gt;
| 16 (25)&lt;br /&gt;
| 12 (50)&lt;br /&gt;
| Social Media&lt;br /&gt;
| 44 (18)&lt;br /&gt;
| 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Generation&lt;br /&gt;
| 46 (14)&lt;br /&gt;
| 19 (23)&lt;br /&gt;
| Speech&lt;br /&gt;
| 19 (31)&lt;br /&gt;
| 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction&lt;br /&gt;
| 46 (28)&lt;br /&gt;
| 16 (12)&lt;br /&gt;
| Style&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| 13 (30)&lt;br /&gt;
| Summarization&lt;br /&gt;
| 22 (27)&lt;br /&gt;
| 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP&lt;br /&gt;
| 100 (29)&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| Syntax&lt;br /&gt;
| 36 (52)&lt;br /&gt;
| 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation&lt;br /&gt;
| 49 (30)&lt;br /&gt;
| 53 (18)&lt;br /&gt;
| Text Mining&lt;br /&gt;
| 101 (18)&lt;br /&gt;
| 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP&lt;br /&gt;
| 43 (25)&lt;br /&gt;
| 28 (10)&lt;br /&gt;
| Theory and Formalisms&lt;br /&gt;
| 12 (58)&lt;br /&gt;
| 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications&lt;br /&gt;
| 60 (30)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
| Vision &amp;amp; Robotics&lt;br /&gt;
| 41 (12)&lt;br /&gt;
| 22 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology&lt;br /&gt;
| 24 (33)&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%), and ran a lunchtime Careers in Industry panel, which was very well attended. Panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault; the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered, meaning that authors who opted for the non-archival version will not have a paper available in the archive and are free to publish elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
= Reviewing =&lt;br /&gt;
&lt;br /&gt;
== Recruiting ACs and Reviewers ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit the Area Chairs and Reviewers. All volunteers were screened by the PCs and assigned AC or reviewer roles, and each area was seeded with a set of volunteer reviewers. Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per topic area. After the abstract deadline, we added more ACs to teams with larger-than-predicted submissions. Our goal was to ensure greater diversity by including in each area some participants who had not previously been involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair (%)&lt;br /&gt;
! Reviewer (%)&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
THERE IS ANOTHER GRAPHIC TO INSERT HERE&lt;br /&gt;
&lt;br /&gt;
== Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue and Vision. Text Mining ended up with the overall largest number of submissions.  &lt;br /&gt;
&lt;br /&gt;
== Handling desk rejects ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As reviewing begins, reviewers may also identify issues that were not caught by the ACs, which they flag to the ACs or directly to the PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations to the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations to the anonymity policy as specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines in either LaTeX or Word (thanks to Emily and Leon for introducing the concept).&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for formatting issues, 24 for anonymity violations, and 11 for dual submissions, so in total 79 papers, roughly 3% of submissions, were desk-rejected.&lt;br /&gt;
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load rebalancing and conflict resolution using keywords and manual inspection of the papers. Area Chair identities were not revealed to authors.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer; some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018 and ACL 2018 forms, with a 6-point overall rating scale so there was no “easy out” mid-point; distinct sections for summary, strengths and weaknesses, to make it easy to scan and compare the relevant sections; and the minimum-length feature of START enabled, to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To be added:&#039;&#039;&#039; X reviews were received by the end of the review period, Y others within the next week; importance of double-blind reviewing.&lt;br /&gt;
No author response: dropped due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
Video poster highlights (instead of 1-minute madness): A/V failures have made it hard to assess effectiveness. &lt;br /&gt;
SRW papers integrated into sessions: positive feedback from participants, and a better experience for students&lt;br /&gt;
Did not repeat the Test of Time awards from 2018; should these happen every N years to allow for a sliding window?&lt;br /&gt;
&lt;br /&gt;
MORE GRAPHICS TO BE ADDED HERE&lt;br /&gt;
&lt;br /&gt;
= Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
= Presentations =&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), duration: 15 minutes for talk + 3 minutes for questions + 2 dedicated Industry Track sessions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= Timeline =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
= Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
* Maintaining anonymity is becoming increasingly difficult&lt;br /&gt;
* Open review from overlapping conferences&lt;br /&gt;
* Expectation for transparency is at odds with the confidential review process&lt;br /&gt;
* The higher volume of papers is straining our infrastructure&lt;br /&gt;
* START tools struggle to support this volume of papers&lt;br /&gt;
* Reviewer overload/burnout&lt;br /&gt;
* Look into sharing reviews for rejected papers with subsequent conferences&lt;br /&gt;
* Revisit using Open Review for *ACL&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73129</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73129"/>
		<updated>2019-07-19T19:50:03Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Reviewing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Rozovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
	Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
	Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
	Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
	Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
	Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done in recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year included one-minute slides with pre-recorded audio showcasing the posters to be presented that day. The goal was to give posters more visibility. These were shown during the welcome reception, breakfast, and breaks.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodation needs&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
** INSERT LINK TO THEIR REPORT&lt;br /&gt;
&lt;br /&gt;
* Two-stage Submissions&lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume.&lt;br /&gt;
:: Pro: early response to areas with a larger than predicted number of papers&lt;br /&gt;
:: Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
= Submission rates and distributions =&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 that never submitted a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn prior to acceptance decisions being sent, although some were withdrawn partway through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit more papers into 3 days; as the conference grows, decisions will have to be made about continuing to add tracks, adding more days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked TACL or SRW accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed&lt;br /&gt;
| 1067&lt;br /&gt;
| 666&lt;br /&gt;
| 1733&lt;br /&gt;
| &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk&lt;br /&gt;
| 140&lt;br /&gt;
| 72&lt;br /&gt;
| 212&lt;br /&gt;
| 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster&lt;br /&gt;
| 141&lt;br /&gt;
| 70&lt;br /&gt;
| 211&lt;br /&gt;
| 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted&lt;br /&gt;
| 281 (26.3%)&lt;br /&gt;
| 142 (21.3%)&lt;br /&gt;
| 423 (24.4%)&lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP&lt;br /&gt;
| 7 (57)&lt;br /&gt;
| 28 (17)&lt;br /&gt;
| Question Answering&lt;br /&gt;
| 73 (36)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling&lt;br /&gt;
| 24 (29)&lt;br /&gt;
| 14 (14)&lt;br /&gt;
| Resources and Evaluation&lt;br /&gt;
| 33 (27)&lt;br /&gt;
| 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems&lt;br /&gt;
| 64 (20)&lt;br /&gt;
| 18 (27)&lt;br /&gt;
| Semantics&lt;br /&gt;
| 80 (13)&lt;br /&gt;
| 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics&lt;br /&gt;
| 38 (21)&lt;br /&gt;
| 11 (36)&lt;br /&gt;
| Sentiment Analysis&lt;br /&gt;
| 32 (28)&lt;br /&gt;
| 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness&lt;br /&gt;
| 16 (25)&lt;br /&gt;
| 12 (50)&lt;br /&gt;
| Social Media&lt;br /&gt;
| 44 (18)&lt;br /&gt;
| 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Generation&lt;br /&gt;
| 46 (14)&lt;br /&gt;
| 19 (23)&lt;br /&gt;
| Speech&lt;br /&gt;
| 19 (31)&lt;br /&gt;
| 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction&lt;br /&gt;
| 46 (28)&lt;br /&gt;
| 16 (12)&lt;br /&gt;
| Style&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| 13 (30)&lt;br /&gt;
| Summarization&lt;br /&gt;
| 22 (27)&lt;br /&gt;
| 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP&lt;br /&gt;
| 100 (29)&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| Syntax&lt;br /&gt;
| 36 (52)&lt;br /&gt;
| 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation&lt;br /&gt;
| 49 (30)&lt;br /&gt;
| 53 (18)&lt;br /&gt;
| Text Mining&lt;br /&gt;
| 101 (18)&lt;br /&gt;
| 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP&lt;br /&gt;
| 43 (25)&lt;br /&gt;
| 28 (10)&lt;br /&gt;
| Theory and Formalisms&lt;br /&gt;
| 12 (58)&lt;br /&gt;
| 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications&lt;br /&gt;
| 60 (30)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
| Vision &amp;amp; Robotics&lt;br /&gt;
| 41 (12)&lt;br /&gt;
| 22 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology&lt;br /&gt;
| 24 (33)&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%) and ran a very well attended lunchtime Careers in Industry panel. Panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault; the moderator was Phil Resnik.&lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered: authors who opted for the non-archival version will not have a paper in the archive and are free to publish the work elsewhere.&lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
= Reviewing =&lt;br /&gt;
&lt;br /&gt;
== Recruiting ACs and Reviewers ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers. All volunteers were screened by the PCs and assigned AC or reviewer roles, and each area was seeded with a set of volunteer reviewers. Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per area. After the abstract deadline, we added more ACs to teams with larger-than-predicted submission volumes. Our goal was to ensure greater diversity by including in each area some participants who had not been previously involved, and therefore would not have been invited if the committees were built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair (%)&lt;br /&gt;
! Reviewer (%)&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
THERE IS ANOTHER GRAPHIC TO INSERT HERE&lt;br /&gt;
&lt;br /&gt;
== Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the largest overall number of submissions.&lt;br /&gt;
&lt;br /&gt;
== Handling desk rejects ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs checked their batch of assigned papers and reported any issues to us. As reviewing began, reviewers also identified issues that were not caught by ACs, which they flagged to ACs or directly to PCs. We then reviewed each of these issues and made a final decision, to ensure that papers were handled consistently. This means each paper was checked for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects were:&lt;br /&gt;
* Violations of the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines, in either LaTeX or Word (thanks to Emily and Leon for introducing the concept)&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions, for a total of 79 desk rejects (roughly 3% of the submissions).&lt;br /&gt;
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to Area Chairs.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer, though some reviewers agreed to handle more.&lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid review form combining elements of the EMNLP 2018, NAACL-HLT 2018 and ACL 2018 forms: a 6-point overall rating scale, so there was no “easy out” mid-point; distinct summary, strengths and weaknesses sections, to make the relevant parts easy to scan and compare; and START’s minimum-length feature enabled, to elicit more consistently substantive content for the authors. The form received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To be added:&#039;&#039;&#039; X reviews were received by the end of the review period, Y others within the next week; importance of double-blind reviewing.&lt;br /&gt;
* No author response: due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
* Video poster highlights (replacing 1-minute madness): A/V failures have made it hard to assess effectiveness.&lt;br /&gt;
* SRW papers integrated into sessions: positive feedback from participants, better experience for students&lt;br /&gt;
* Did not repeat the Test of Time awards from 2018; should these happen every N years to allow for a sliding window?&lt;br /&gt;
&lt;br /&gt;
MORE GRAPHICS TO BE ADDED HERE&lt;br /&gt;
&lt;br /&gt;
= Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
= Presentations =&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), duration: 15 minutes for talk + 3 minutes for questions + 2 dedicated Industry Track sessions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= Timeline =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
= Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73127</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73127"/>
		<updated>2019-07-19T19:14:07Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Deciding on the reject-without-review papers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Rozovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
	Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
	Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
	Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
	Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
	Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= 	Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done in recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year included one-minute slides with pre-recorded audio showcasing the posters to be presented that day. The goal was to give posters more visibility. These were shown during the welcome reception, breakfast and breaks.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
** INSERT LINK TO THEIR REPORT&lt;br /&gt;
&lt;br /&gt;
*	Two-stage Submissions &lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
:: Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
::  Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
= Submissions rates and distributions =&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted their full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some partway through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit more papers into 3 days; as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed&lt;br /&gt;
| 1067&lt;br /&gt;
| 666&lt;br /&gt;
| 1733&lt;br /&gt;
| &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk&lt;br /&gt;
| 140&lt;br /&gt;
| 72&lt;br /&gt;
| 212&lt;br /&gt;
| 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster&lt;br /&gt;
| 141&lt;br /&gt;
| 70&lt;br /&gt;
| 211&lt;br /&gt;
| 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted&lt;br /&gt;
| 281 (26.3%)&lt;br /&gt;
| 142 (21.3%)&lt;br /&gt;
| 423 (24.4%)&lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Area !! Long (%) !! Short (%) !! Area !! Long (%) !! Short (%)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP || 7 (57) || 28 (17) || Question Answering || 73 (36) || 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling || 24 (29) || 14 (14) || Resources and Evaluation || 33 (27) || 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems || 64 (20) || 18 (27) || Semantics || 80 (13) || 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics || 38 (21) || 11 (36) || Sentiment Analysis || 32 (28) || 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness || 16 (25) || 12 (50) || Social Media || 44 (18) || 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Generation || 46 (14) || 19 (23) || Speech || 19 (31) || 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction || 46 (28) || 16 (12) || Style || 24 (25) || 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval || 22 (22) || 13 (30) || Summarization || 22 (27) || 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP || 100 (29) || 22 (22) || Syntax || 36 (52) || 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation || 49 (30) || 53 (18) || Text Mining || 101 (18) || 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP || 43 (25) || 28 (10) || Theory and Formalisms || 12 (58) || 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications || 60 (30) || 41 (17) || Vision &amp;amp; Robotics || 41 (12) || 22 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology || 24 (33) || 24 (25) || || || &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==       Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%), and ran a lunchtime Careers in Industry panel that was very well attended. Panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault, and the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered; authors who opted for the non-archival option do not have a paper in the proceedings and are free to publish the work elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
=	Reviewing =&lt;br /&gt;
&lt;br /&gt;
==	Recruiting ACs and Reviewers  ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers. All volunteers were screened by the PCs and assigned AC/reviewer roles, and each area was seeded with a set of volunteer reviewers. Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per area. After the abstract deadline, we added more ACs to teams with larger-than-predicted submission volumes. Our goal was to ensure greater diversity by including in each area some participants who had not previously been involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair&lt;br /&gt;
! Reviewer&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
THERE IS ANOTHER GRAPHIC TO INSERT HERE&lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the largest number of submissions overall. &lt;br /&gt;
&lt;br /&gt;
== 	Handling desk rejects ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As reviewing begins, reviewers may also identify issues that were not caught by ACs, which they flag to the ACs or directly to the PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations of the dual-submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines, in either LaTeX or Word (thanks to Emily and Leon for introducing the concept)&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions, meaning that roughly 3% of the submissions were desk-rejected.&lt;br /&gt;
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load-rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to Area Chair assignments.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer, though some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018 and ACL 2018 forms: a 6-point overall rating scale so there was no “easy out” mid-point; distinct sections for summary, strengths and weaknesses, to make relevant sections easy to scan and compare; and START’s minimum-length feature enabled to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To be added:&#039;&#039;&#039; X reviews were received by the end of the review period, and Y others within the following week; importance of double-blind reviewing.&lt;br /&gt;
No author response: omitted due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
Video Poster Highlights: offered instead of one-minute madness; A/V failures have made it hard to assess their effectiveness. &lt;br /&gt;
SRW papers integrated into sessions: positive feedback from participants, and a better experience for students.&lt;br /&gt;
Did not repeat the Test of Time awards from 2018; should this happen every N years to allow for a sliding window?&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
=	Presentations=&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), plus 2 dedicated Industry Track sessions; duration: 15 minutes for the talk + 3 minutes for questions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= 	Timeline =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
=	Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73126</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73126"/>
		<updated>2019-07-19T19:13:42Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Reviewing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
	Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
	Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
	Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
	Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
	Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= 	Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done in recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year we showed one-minute slides with pre-recorded audio showcasing the posters to be presented that day. The goal was to give posters more visibility. These were shown during the welcome reception, breakfasts, and breaks.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
** INSERT LINK TO THEIR REPORT&lt;br /&gt;
&lt;br /&gt;
*	Two-stage Submissions &lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
:: Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
::  Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
= Submissions rates and distributions =&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that never became full papers in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit more papers into 3 days; as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed&lt;br /&gt;
| 1067&lt;br /&gt;
| 666&lt;br /&gt;
| 1733&lt;br /&gt;
| &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk&lt;br /&gt;
| 140&lt;br /&gt;
| 72&lt;br /&gt;
| 212&lt;br /&gt;
| 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster&lt;br /&gt;
| 141&lt;br /&gt;
| 70&lt;br /&gt;
| 211&lt;br /&gt;
| 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted&lt;br /&gt;
| 281 (26.3%)&lt;br /&gt;
| 142 (21.3%)&lt;br /&gt;
| 423 (24.4%)&lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP&lt;br /&gt;
| 7 (57)&lt;br /&gt;
| 28 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Question Answering&lt;br /&gt;
| 73 (36)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling&lt;br /&gt;
| 24 (29)&lt;br /&gt;
| 14 (14)&lt;br /&gt;
|- &lt;br /&gt;
| Resources and Evaluation&lt;br /&gt;
| 33 (27)&lt;br /&gt;
| 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems&lt;br /&gt;
| 64 (20)&lt;br /&gt;
| 18 (27)&lt;br /&gt;
|- &lt;br /&gt;
| Semantics&lt;br /&gt;
| 80 (13)&lt;br /&gt;
| 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics&lt;br /&gt;
| 38 (21)&lt;br /&gt;
| 11 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Sentiment Analysis&lt;br /&gt;
| 32 (28)&lt;br /&gt;
| 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness&lt;br /&gt;
| 16 (25)&lt;br /&gt;
| 12 (50)&lt;br /&gt;
|- &lt;br /&gt;
| Social Media&lt;br /&gt;
| 44 (18)&lt;br /&gt;
| 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Generation&lt;br /&gt;
| 46 (14)&lt;br /&gt;
| 19 (23)&lt;br /&gt;
|- &lt;br /&gt;
| Speech&lt;br /&gt;
| 19 (31)&lt;br /&gt;
| 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction&lt;br /&gt;
| 46 (28)&lt;br /&gt;
| 16 (12)&lt;br /&gt;
|- &lt;br /&gt;
| Style&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| 13 (30)&lt;br /&gt;
|- &lt;br /&gt;
| Summarization&lt;br /&gt;
| 22 (27)&lt;br /&gt;
| 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP&lt;br /&gt;
| 100 (29)&lt;br /&gt;
| 22 (22)&lt;br /&gt;
|- &lt;br /&gt;
| Syntax&lt;br /&gt;
| 36 (52)&lt;br /&gt;
| 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation&lt;br /&gt;
| 49 (30)&lt;br /&gt;
| 53 (18)&lt;br /&gt;
|- &lt;br /&gt;
| Text Mining&lt;br /&gt;
| 101 (18)&lt;br /&gt;
| 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP&lt;br /&gt;
| 43 (25)&lt;br /&gt;
| 28 (10)&lt;br /&gt;
|- &lt;br /&gt;
| Theory and Formalisms&lt;br /&gt;
| 12 (58)&lt;br /&gt;
| 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications&lt;br /&gt;
| 60 (30)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Vision &amp;amp; Robotics&lt;br /&gt;
| 41 (12)&lt;br /&gt;
| 22 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology&lt;br /&gt;
| 24 (33)&lt;br /&gt;
| 24 (25)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==       Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%) and ran a lunchtime Careers in Industry panel that was very well attended. The panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault, and the moderator was Phil Resnik.&lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered, meaning that authors who opted for the non-archival version will not have a paper available in the archive and are free to publish elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
=	Reviewing =&lt;br /&gt;
&lt;br /&gt;
==	Recruiting ACs and Reviewers  ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers. All volunteers were screened by the PCs and assigned AC/reviewer roles, and each area was seeded with a set of volunteer reviewers. Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per topic area. After the abstract deadline, we added more ACs to teams with larger-than-predicted submissions. Our goal was to ensure greater diversity by including in each area some participants who had not previously been involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair&lt;br /&gt;
! Reviewer&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
THERE IS ANOTHER GRAPHIC TO INSERT HERE&lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue and Vision. Text Mining ended up with the overall largest number of submissions.  &lt;br /&gt;
&lt;br /&gt;
== 	Deciding on the reject-without-review papers ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs checked their batch of assigned papers and reported any issues to us. As reviewing began, reviewers could also identify issues not caught by ACs, which they flagged to ACs or directly to the PCs. We then reviewed each of these issues and made a final decision, to ensure that papers were handled consistently. This means each paper was reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations of the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines in either LaTeX or Word (thanks to Emily and Leon for introducing the concept).&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions, meaning a total of 79 papers (approximately 3% of submissions) were desk-rejected.&lt;br /&gt;
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to Area Chairs.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer; some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale so there was no “easy out” mid-point; distinct summary, strengths, and weaknesses sections to make it easy to scan and compare relevant sections; and START’s minimum-length feature enabled to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To be added:&#039;&#039;&#039; X reviews were received by the end of the review period, Y others within the next week; importance of double-blind reviewing&lt;br /&gt;
No author response: omitted due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
Video Poster Highlights (instead of 1-minute madness): A/V failures have made it hard to assess effectiveness. &lt;br /&gt;
SRW papers integrated into sessions: positive feedback from participants, a better experience for students&lt;br /&gt;
Did not repeat the Test of Time awards from 2018; should this happen every N years to allow for a sliding window?&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: Probing the Need for Visual Context in Multimodal Machine Translation &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
=	Presentations=&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), duration: 15 minutes for talk + 3 minutes for questions + 2 dedicated Industry Track sessions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= 	Timeline =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
=	Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73125</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73125"/>
		<updated>2019-07-19T19:13:09Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* An overview of submissions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
	Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
	Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
	Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
	Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
	Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= 	Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done in recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year we showed one-minute slides with pre-recorded audio showcasing the posters to be presented that day. The goal was to give posters more visibility. These were shown during the welcome reception, breakfasts, and breaks.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
** INSERT LINK TO THEIR REPORT&lt;br /&gt;
&lt;br /&gt;
*	Two-stage Submissions &lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
:: Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
::  Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
= Submissions rates and distributions =&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that never became full papers in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit more papers into 3 days; as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed&lt;br /&gt;
| 1067&lt;br /&gt;
| 666&lt;br /&gt;
| 1733&lt;br /&gt;
| &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk&lt;br /&gt;
| 140&lt;br /&gt;
| 72&lt;br /&gt;
| 212&lt;br /&gt;
| 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster&lt;br /&gt;
| 141&lt;br /&gt;
| 70&lt;br /&gt;
| 211&lt;br /&gt;
| 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted&lt;br /&gt;
| 281 (26.3%)&lt;br /&gt;
| 142 (21.3%)&lt;br /&gt;
| 423 (24.4%)&lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long: submissions (% accepted)&lt;br /&gt;
! Short: submissions (% accepted)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP || 7 (57) || 28 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Question Answering || 73 (36) || 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling || 24 (29) || 14 (14)&lt;br /&gt;
|- &lt;br /&gt;
| Resources and Evaluation || 33 (27) || 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems || 64 (20) || 18 (27)&lt;br /&gt;
|- &lt;br /&gt;
| Semantics || 80 (13) || 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics || 38 (21) || 11 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Sentiment Analysis || 32 (28) || 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness || 16 (25) || 12 (50)&lt;br /&gt;
|- &lt;br /&gt;
| Social Media || 44 (18) || 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Generation || 46 (14) || 19 (23)&lt;br /&gt;
|- &lt;br /&gt;
| Speech || 19 (31) || 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction || 46 (28) || 16 (12)&lt;br /&gt;
|- &lt;br /&gt;
| Style || 24 (25) || 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval || 22 (22) || 13 (30)&lt;br /&gt;
|- &lt;br /&gt;
| Summarization || 22 (27) || 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP || 100 (29) || 22 (22)&lt;br /&gt;
|- &lt;br /&gt;
| Syntax || 36 (52) || 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation || 49 (30) || 53 (18)&lt;br /&gt;
|- &lt;br /&gt;
| Text Mining || 101 (18) || 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP || 43 (25) || 28 (10)&lt;br /&gt;
|- &lt;br /&gt;
| Theory and Formalisms || 12 (58) || 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications || 60 (30) || 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Vision &amp;amp; Robotics || 41 (12) || 22 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology || 24 (33) || 24 (25)&lt;br /&gt;
|}&lt;br /&gt;
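Each per-area cell above pairs a submission count with its acceptance percentage, e.g. "73 (36)" means 73 long submissions of which 36% were accepted. A small illustrative parser for that cell format (the sample dictionary covers only the first two areas and is scaffolding for illustration, not part of the report):&lt;br /&gt;

```python
# Each cell is "submissions (acceptance %)".
sample = {
    "Bio and clinical NLP": ("7 (57)", "28 (17)"),
    "Question Answering": ("73 (36)", "41 (17)"),
}  # remaining areas omitted for brevity

def parse(cell):
    """Split a 'count (pct)' cell into (count, pct) as integers."""
    count, pct = cell.replace("(", "").replace(")", "").split()
    return int(count), int(pct)

long_submissions = sum(parse(long_cell)[0] for long_cell, _ in sample.values())
print(long_submissions)  # 7 + 73 = 80
```

Summing the long-paper counts over all listed areas comes to slightly less than the 1067 reviewed long papers, presumably because the Mixed Topics area is not broken out in this table.&lt;br /&gt;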
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 poster; acceptance rate ~28%) and ran a very well-attended lunchtime Careers in Industry panel. Panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault; Phil Resnik moderated. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered: authors who opted for the non-archival version have no paper in the proceedings archive and are free to publish the work elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
= Reviewing =&lt;br /&gt;
&lt;br /&gt;
== Recruiting ACs and Reviewers ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers. All volunteers were screened by the PCs and assigned AC or reviewer roles, and each area was seeded with a set of volunteer reviewers; Area Chairs then filled out the remainder of their committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per area. After the abstract deadline, we added more ACs to areas with larger-than-predicted submission counts. Our goal was to ensure greater diversity by including in each area some participants who had not previously been involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair&lt;br /&gt;
! Reviewer&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
THERE IS ANOTHER GRAPHIC TO INSERT HERE&lt;br /&gt;
&lt;br /&gt;
== Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining received the largest number of submissions overall. &lt;br /&gt;
&lt;br /&gt;
== Deciding on the reject-without-review papers ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs checked their batches of assigned papers and reported any issues to us. Once reviewing began, reviewers sometimes identified issues that the ACs had not caught, and flagged them to the ACs or directly to the PCs. We then reviewed each of these issues and made the final decision, to ensure that papers were handled consistently. As a result, each paper was checked for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects were:&lt;br /&gt;
* Violations of the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated LaTeX or Word format and style guidelines (thanks to Emily and Leon for introducing the concept)&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions, so 79 papers in total, about 3.3% of submissions, were desk-rejected.&lt;br /&gt;
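The desk-reject share works out as follows, a quick cross-check using only the figures quoted above:&lt;br /&gt;

```python
# Desk-reject tally as of February 7, from the figures above.
rejections = {"format": 44, "anonymity": 24, "dual submission": 11}
submissions = 2378

total = sum(rejections.values())  # 79
share = 100 * total / submissions
print(f"{total} desk rejects = {share:.1f}% of {submissions} submissions")
# prints: 79 desk rejects = 3.3% of 2378 submissions
```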
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted their full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts for which no full paper was submitted in the second phase. Seventy-nine papers were desk-rejected for anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some of them part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant that 5 parallel tracks were needed to fit the papers into 3 days; as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked accordingly.&lt;br /&gt;
&lt;br /&gt;
INSERT CHART&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to Area Chairs.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer; some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale, so there was no “easy out” mid-point; distinct summary, strengths, and weaknesses sections, to make reviews easy to scan and compare; and START’s minimum-length feature enabled, to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To be added:&#039;&#039;&#039; X reviews were received by the end of the review period, Y others within the following week; the importance of double-blind reviewing.&lt;br /&gt;
No author response: dropped due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
Video poster highlights (instead of one-minute madness): A/V failures made it hard to assess effectiveness. &lt;br /&gt;
SRW papers integrated into sessions: positive feedback from participants, a better experience for students&lt;br /&gt;
Did not repeat the Test of Time awards from 2018; should these run every N years to allow for a sliding window?&lt;br /&gt;
&lt;br /&gt;
= Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper: &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
* Best Short Paper: &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper: &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
= Presentations =&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 in parallel), with 15 minutes per talk plus 3 minutes for questions, plus 2 dedicated Industry Track sessions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 in parallel), with 12 minutes per talk plus 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= Timeline =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
= Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73124</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73124"/>
		<updated>2019-07-19T19:11:31Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Main Innovations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all chosen to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done at recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year we included one-minute slides with pre-recorded audio showcasing the posters to be presented that day, with the goal of giving posters more visibility. These were shown during the welcome reception, breakfast, and breaks.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
** INSERT LINK TO THEIR REPORT&lt;br /&gt;
&lt;br /&gt;
* Two-stage Submissions &lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
:: Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
::  Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
= An overview of submissions =&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted their full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts for which no full paper was submitted in the second phase. Seventy-nine papers were desk-rejected for anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some of them part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant that 5 parallel tracks were needed to fit the papers into 3 days; as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed || 1067 || 666 || 1733 || &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk || 140 || 72 || 212 || 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster || 141 || 70 || 211 || 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted || 281 (26.3%) || 142 (21.3%) || 423 (24.4%) || 9&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long: submissions (% accepted)&lt;br /&gt;
! Short: submissions (% accepted)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP || 7 (57) || 28 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Question Answering || 73 (36) || 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling || 24 (29) || 14 (14)&lt;br /&gt;
|- &lt;br /&gt;
| Resources and Evaluation || 33 (27) || 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems || 64 (20) || 18 (27)&lt;br /&gt;
|- &lt;br /&gt;
| Semantics || 80 (13) || 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics || 38 (21) || 11 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Sentiment Analysis || 32 (28) || 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness || 16 (25) || 12 (50)&lt;br /&gt;
|- &lt;br /&gt;
| Social Media || 44 (18) || 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Generation || 46 (14) || 19 (23)&lt;br /&gt;
|- &lt;br /&gt;
| Speech || 19 (31) || 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction || 46 (28) || 16 (12)&lt;br /&gt;
|- &lt;br /&gt;
| Style || 24 (25) || 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval || 22 (22) || 13 (30)&lt;br /&gt;
|- &lt;br /&gt;
| Summarization || 22 (27) || 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP || 100 (29) || 22 (22)&lt;br /&gt;
|- &lt;br /&gt;
| Syntax || 36 (52) || 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation || 49 (30) || 53 (18)&lt;br /&gt;
|- &lt;br /&gt;
| Text Mining || 101 (18) || 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP || 43 (25) || 28 (10)&lt;br /&gt;
|- &lt;br /&gt;
| Theory and Formalisms || 12 (58) || 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications || 60 (30) || 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Vision &amp;amp; Robotics || 41 (12) || 22 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology || 24 (33) || 24 (25)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 poster; acceptance rate ~28%) and ran a very well-attended lunchtime Careers in Industry panel. Panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault; Phil Resnik moderated. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered: authors who opted for the non-archival version have no paper in the proceedings archive and are free to publish the work elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
= Reviewing =&lt;br /&gt;
&lt;br /&gt;
== Recruiting ACs and Reviewers ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers. All volunteers were screened by the PCs and assigned AC or reviewer roles, and each area was seeded with a set of volunteer reviewers; Area Chairs then filled out the remainder of their committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per area. After the abstract deadline, we added more ACs to areas with larger-than-predicted submission counts. Our goal was to ensure greater diversity by including in each area some participants who had not previously been involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair&lt;br /&gt;
! Reviewer&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
THERE IS ANOTHER GRAPHIC TO INSERT HERE&lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the largest number of submissions overall.&lt;br /&gt;
&lt;br /&gt;
== 	Deciding on the reject-without-review papers ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs checked their batch of assigned papers and reported any issues to us. As reviewing began, reviewers could also identify issues that were not caught by ACs, which they flagged to ACs or directly to the PCs. We then reviewed each of these issues and made a final decision, to ensure that papers were handled consistently. This means each paper was reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations of the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines for either LaTeX or Word (thanks to Emily and Leon for introducing the concept)&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions, so a total of 79 papers (roughly 3% of submissions) were desk-rejected.&lt;br /&gt;
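The tally above is simple arithmetic; a minimal sketch (using only the counts reported here) makes the check explicit:

```python
# Desk-reject counts as of February 7th, taken from this report.
desk_rejects = {"format": 44, "anonymity": 24, "dual submission": 11}
total_submissions = 2378

rejected = sum(desk_rejects.values())      # 79 papers in total
rate = 100 * rejected / total_submissions  # about 3.3% of all submissions

print(rejected, round(rate, 1))  # 79 3.3
```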
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn prior to acceptance decisions being sent, some of them part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant we needed 5 parallel tracks to fit more papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding more days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked TACL or SRW accordingly.&lt;br /&gt;
&lt;br /&gt;
INSERT CHART&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to Area Chairs.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer, though some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale so there was no “easy out” mid-point; distinct sections for summary, strengths, and weaknesses, to make it easy to scan and compare the relevant sections; and the minimum-length feature of START enabled, to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To be added:&#039;&#039;&#039; X reviews were received by the end of the review period, Y others within the next week; importance of double-blind reviewing.&lt;br /&gt;
* No author response: due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
* Video poster highlights (instead of one-minute madness): A/V failures have made it hard to assess effectiveness. &lt;br /&gt;
* SRW papers integrated into sessions: positive feedback from participants, better experience for students.&lt;br /&gt;
* Did not repeat the Test of Time awards from 2018--should this happen every N years to allow for a sliding window?&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
=	Presentations=&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 in parallel), plus 2 dedicated Industry Track sessions; duration: 15 minutes for the talk + 3 minutes for questions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 in parallel); duration: 12 minutes for the talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= 	Timeline =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
=	Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73123</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73123"/>
		<updated>2019-07-19T19:10:48Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Main Innovations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
	Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
	Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
	Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
	Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
	Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= 	Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done at recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year we introduced one-minute slides with pre-recorded audio showcasing the posters to be presented that day. The goal was to give posters more visibility. These were shown during the welcome reception, breakfast, and breaks.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
** INSERT LINK TO THEIR REPORT&lt;br /&gt;
&lt;br /&gt;
*	Two-stage Submissions &lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
:: Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
:: Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles, and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
== An overview of submissions ==&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn prior to acceptance decisions being sent, some of them part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit more papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding more days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked TACL or SRW accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed&lt;br /&gt;
| 1067&lt;br /&gt;
| 666&lt;br /&gt;
| 1733&lt;br /&gt;
| &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk&lt;br /&gt;
| 140&lt;br /&gt;
| 72&lt;br /&gt;
| 212&lt;br /&gt;
| 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster&lt;br /&gt;
| 141&lt;br /&gt;
| 70&lt;br /&gt;
| 211&lt;br /&gt;
| 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted&lt;br /&gt;
| 281 (26.3%)&lt;br /&gt;
| 142 (21.3%)&lt;br /&gt;
| 423 (24.4%)&lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
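The percentages in the acceptance break-down follow directly from the reviewed and accepted counts; a quick sketch (numbers copied from the table above) reproduces them:

```python
# Counts from the acceptance break-down table (talks + posters per format).
reviewed = {"long": 1067, "short": 666}
accepted = {"long": 140 + 141, "short": 72 + 70}

for fmt in reviewed:
    rate = 100 * accepted[fmt] / reviewed[fmt]
    print(fmt, accepted[fmt], round(rate, 1))  # long 281 26.3, short 142 21.3

# Overall acceptance rate across both formats.
print(round(100 * (281 + 142) / (1067 + 666), 1))  # 24.4
```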
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP&lt;br /&gt;
| 7 (57)&lt;br /&gt;
| 28 (17)&lt;br /&gt;
| Question Answering&lt;br /&gt;
| 73 (36)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling&lt;br /&gt;
| 24 (29)&lt;br /&gt;
| 14 (14)&lt;br /&gt;
| Resources and Evaluation&lt;br /&gt;
| 33 (27)&lt;br /&gt;
| 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems&lt;br /&gt;
| 64 (20)&lt;br /&gt;
| 18 (27)&lt;br /&gt;
| Semantics&lt;br /&gt;
| 80 (13)&lt;br /&gt;
| 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics&lt;br /&gt;
| 38 (21)&lt;br /&gt;
| 11 (36)&lt;br /&gt;
| Sentiment Analysis&lt;br /&gt;
| 32 (28)&lt;br /&gt;
| 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness&lt;br /&gt;
| 16 (25)&lt;br /&gt;
| 12 (50)&lt;br /&gt;
| Social Media&lt;br /&gt;
| 44 (18)&lt;br /&gt;
| 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Generation&lt;br /&gt;
| 46 (14)&lt;br /&gt;
| 19 (23)&lt;br /&gt;
| Speech&lt;br /&gt;
| 19 (31)&lt;br /&gt;
| 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction&lt;br /&gt;
| 46 (28)&lt;br /&gt;
| 16 (12)&lt;br /&gt;
| Style&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| 13 (30)&lt;br /&gt;
| Summarization&lt;br /&gt;
| 22 (27)&lt;br /&gt;
| 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP&lt;br /&gt;
| 100 (29)&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| Syntax&lt;br /&gt;
| 36 (52)&lt;br /&gt;
| 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation&lt;br /&gt;
| 49 (30)&lt;br /&gt;
| 53 (18)&lt;br /&gt;
| Text Mining&lt;br /&gt;
| 101 (18)&lt;br /&gt;
| 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP&lt;br /&gt;
| 43 (25)&lt;br /&gt;
| 28 (10)&lt;br /&gt;
| Theory and Formalisms&lt;br /&gt;
| 12 (58)&lt;br /&gt;
| 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications&lt;br /&gt;
| 60 (30)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
| Vision &amp;amp; Robotics&lt;br /&gt;
| 41 (12)&lt;br /&gt;
| 22 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology&lt;br /&gt;
| 24 (33)&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==       Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%), and ran a lunchtime Careers in Industry panel that was very well attended. The panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault, and the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered: authors who opted for the non-archival option do not have a paper in the proceedings archive and remain free to publish the work elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
=	Reviewing =&lt;br /&gt;
&lt;br /&gt;
==	Recruiting ACs and Reviewers  ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers. All volunteers were screened by the PCs and assigned AC or reviewer roles, and each area was seeded with a set of volunteer reviewers. Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per area. After the abstract deadline, we added more ACs to teams with larger-than-predicted submission volumes. Our goal was to ensure greater diversity by including in each area some participants who may not have been previously involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair (%)&lt;br /&gt;
! Reviewer (%)&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
THERE IS ANOTHER GRAPHIC TO INSERT HERE&lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the largest number of submissions overall.&lt;br /&gt;
&lt;br /&gt;
== 	Deciding on the reject-without-review papers ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs checked their batch of assigned papers and reported any issues to us. As reviewing began, reviewers could also identify issues that were not caught by ACs, which they flagged to ACs or directly to the PCs. We then reviewed each of these issues and made a final decision, to ensure that papers were handled consistently. This means each paper was reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations of the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines for either LaTeX or Word (thanks to Emily and Leon for introducing the concept)&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions, so a total of 79 papers (roughly 3% of submissions) were desk-rejected.&lt;br /&gt;
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn prior to acceptance decisions being sent, some of them part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant we needed 5 parallel tracks to fit more papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding more days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked TACL or SRW accordingly.&lt;br /&gt;
&lt;br /&gt;
INSERT CHART&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to Area Chairs.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer, though some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale so there was no “easy out” mid-point; distinct sections for summary, strengths, and weaknesses, to make it easy to scan and compare the relevant sections; and the minimum-length feature of START enabled, to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To be added:&#039;&#039;&#039; X reviews were received by the end of the review period, Y others within the next week; importance of double-blind reviewing&lt;br /&gt;
No author response: omitted due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
Video poster highlights (instead of 1-minute madness): A/V failures have made it hard to assess effectiveness. &lt;br /&gt;
SRW papers integrated into sessions: positive feedback from participants, better experience for students&lt;br /&gt;
We did not repeat the Test of Time awards from 2018; should these happen every N years to allow for a sliding window?&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
=	Presentations=&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel) plus 2 dedicated Industry Track sessions; duration: 15 minutes for the talk + 3 minutes for questions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel); duration: 12 minutes for the talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= 	Timeline =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
=	Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73122</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73122"/>
		<updated>2019-07-19T19:05:35Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Main Innovations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
	Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
	Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
	Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
	Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
	Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= 	Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
:The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
:Similar to what has been done in recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
: This year we included one-minute slides with pre-recorded audio showcasing the posters to be presented each day. The goal was to give posters more visibility. These were shown during the welcome reception, breakfasts, and breaks.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
: Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* Diversity &amp;amp; Inclusion Organization&lt;br /&gt;
The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives, including:&lt;br /&gt;
** additional questions on the registration form to identify any needed accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
&amp;lt;bunch of others, pull from their report&amp;gt;?&lt;br /&gt;
&lt;br /&gt;
* 	Submissions &lt;br /&gt;
:This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
** Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
** Con: too much overhead for the PCs, as authors repeatedly contacted the chairs to request that papers be moved between long and short formats, or to ask about changes to authorship, titles, and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it; authors did not&lt;br /&gt;
&lt;br /&gt;
== An overview of statistics ==&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted their full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts for which no full paper was submitted in the second phase. Seventy-nine papers were desk-rejected for anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit more papers into 3 days; as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed&lt;br /&gt;
| 1067&lt;br /&gt;
| 666&lt;br /&gt;
| 1733&lt;br /&gt;
| &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk&lt;br /&gt;
| 140&lt;br /&gt;
| 72&lt;br /&gt;
| 212&lt;br /&gt;
| 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster&lt;br /&gt;
| 141&lt;br /&gt;
| 70&lt;br /&gt;
| 211&lt;br /&gt;
| 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted&lt;br /&gt;
| 281 (26.3%)&lt;br /&gt;
| 142 (21.3%)&lt;br /&gt;
| 423 (24.4%)&lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
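As a quick sanity check (an illustrative snippet only, not part of the original report), the acceptance percentages in the table can be reproduced from the raw counts:&lt;br /&gt;

```python
# Counts copied from the acceptance break-down table above.
reviewed = {"long": 1067, "short": 666}
accepted = {"long": 281, "short": 142}

def rate(category):
    """Acceptance rate in percent, rounded to one decimal place."""
    return round(100 * accepted[category] / reviewed[category], 1)

# Overall rate across long and short papers combined.
total_rate = round(100 * sum(accepted.values()) / sum(reviewed.values()), 1)
```

This reproduces the 26.3% (long), 21.3% (short), and 24.4% (overall) figures shown in the table.&lt;br /&gt;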
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP&lt;br /&gt;
| 7 (57)&lt;br /&gt;
| 28 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling&lt;br /&gt;
| 24 (29)&lt;br /&gt;
| 14 (14)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems&lt;br /&gt;
| 64 (20)&lt;br /&gt;
| 18 (27)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics&lt;br /&gt;
| 38 (21)&lt;br /&gt;
| 11 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness&lt;br /&gt;
| 16 (25)&lt;br /&gt;
| 12 (50)&lt;br /&gt;
|- &lt;br /&gt;
| Generation&lt;br /&gt;
| 46 (14)&lt;br /&gt;
| 19 (23)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction&lt;br /&gt;
| 46 (28)&lt;br /&gt;
| 16 (12)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| 13 (30)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP&lt;br /&gt;
| 100 (29)&lt;br /&gt;
| 22 (22)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation&lt;br /&gt;
| 49 (30)&lt;br /&gt;
| 53 (18)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP&lt;br /&gt;
| 43 (25)&lt;br /&gt;
| 28 (10)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications&lt;br /&gt;
| 60 (30)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology&lt;br /&gt;
| 24 (33)&lt;br /&gt;
| 24 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Question Answering&lt;br /&gt;
| 73 (36)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Resources and Evaluation&lt;br /&gt;
| 33 (27)&lt;br /&gt;
| 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Semantics&lt;br /&gt;
| 80 (13)&lt;br /&gt;
| 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Sentiment Analysis&lt;br /&gt;
| 32 (28)&lt;br /&gt;
| 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Social Media&lt;br /&gt;
| 44 (18)&lt;br /&gt;
| 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Speech&lt;br /&gt;
| 19 (31)&lt;br /&gt;
| 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Style&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Summarization&lt;br /&gt;
| 22 (27)&lt;br /&gt;
| 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Syntax&lt;br /&gt;
| 36 (52)&lt;br /&gt;
| 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Text Mining&lt;br /&gt;
| 101 (18)&lt;br /&gt;
| 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Theory and Formalisms&lt;br /&gt;
| 12 (58)&lt;br /&gt;
| 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| Vision &amp;amp; Robotics&lt;br /&gt;
| 41 (12)&lt;br /&gt;
| 22 (36)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==       Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%) and ran a lunchtime Careers in Industry panel that was very well attended. The panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault, and the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered: authors who opted for the non-archival option do not have a paper in the archive and are free to publish the work elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
=	Reviewing =&lt;br /&gt;
&lt;br /&gt;
==	Recruiting ACs and Reviewers  ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers. All volunteers were screened by the PCs and assigned AC or reviewer roles, and each area was seeded with a set of volunteer reviewers. Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per area. After the abstract deadline, we added more ACs to teams with larger than predicted submission volumes. Our goal was to ensure greater diversity by including in each area some participants who had not been previously involved, and therefore would not have been invited if the committees were built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair (%)&lt;br /&gt;
! Reviewer (%)&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
THERE IS ANOTHER GRAPHIC TO INSERT HERE&lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received higher-than-predicted numbers of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the largest overall number of submissions. &lt;br /&gt;
&lt;br /&gt;
== 	Deciding on the reject-without-review papers ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs checked their batch of assigned papers and reported any issues to us. As reviewing began, reviewers sometimes identified issues that had not been caught by the ACs, which they flagged to the ACs or directly to the PCs. We then reviewed each of these issues and made a final decision, to ensure that papers were handled consistently. This means each paper was reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations of the dual-submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines in either LaTeX or Word (thanks to Emily and Leon for introducing the concept).&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions, i.e. 79 papers in total, or approximately 3.3% of the submissions.&lt;br /&gt;
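The desk-reject counts quoted above can be cross-checked with a short illustrative snippet (the figures are copied from the text; 79 of 2378 works out to about 3.3%):&lt;br /&gt;

```python
# Desk-reject counts as of February 7th, copied from the report.
desk_rejects = {"format": 44, "anonymity": 24, "dual_submission": 11}
submissions = 2378

total_rejected = sum(desk_rejects.values())
percent_rejected = round(100 * total_rejected / submissions, 1)
```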
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted their full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts for which no full paper was submitted in the second phase. Seventy-nine papers were desk-rejected for anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant we needed 5 parallel tracks to fit more papers into 3 days; as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consists of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers are integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
INSERT CHART&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to Area Chairs.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer, though some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid review form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms, with a 6-point overall rating scale so there was no “easy out” mid-point; distinct sections for summary, strengths, and weaknesses, to make it easy to scan and compare the relevant parts of each review; and START’s minimum-length feature enabled, to elicit more consistently substantive content for the authors. The form received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To be added:&#039;&#039;&#039; X reviews were received by the end of the review period, Y others within the next week; importance of double-blind reviewing&lt;br /&gt;
No author response: omitted due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
Video poster highlights (instead of 1-minute madness): A/V failures have made it hard to assess effectiveness. &lt;br /&gt;
SRW papers integrated into sessions: positive feedback from participants, better experience for students&lt;br /&gt;
We did not repeat the Test of Time awards from 2018; should these happen every N years to allow for a sliding window?&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
=	Presentations=&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel) plus 2 dedicated Industry Track sessions; duration: 15 minutes for the talk + 3 minutes for questions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel); duration: 12 minutes for the talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= 	Timeline =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
=	Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73121</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73121"/>
		<updated>2019-07-19T19:03:51Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Program Committee */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Zornitsa Kozareva, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sujith Ravi, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael White, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wei Xu, Ohio State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	David McClosky, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Gerard de Melo, Rutgers University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Timothy Miller, Boston Children’s Hospital, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&amp;lt;br /&amp;gt;&lt;br /&gt;
	Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
	Marie-Francine Moens, KU Leuven, Belgium&amp;lt;br /&amp;gt;&lt;br /&gt;
	Vikram Ramanarayanan, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Anna Rumshisky, University of Massachusetts Lowell, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
	Daniel Cer, Google Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Haitao Mi, Ant Financial US, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Preslav Nakov, Qatar Computing Research Institute, Qatar&amp;lt;br /&amp;gt;&lt;br /&gt;
	Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&amp;lt;br /&amp;gt;&lt;br /&gt;
	Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ekaterina Shutova, University of Amsterdam, Netherlands&amp;lt;br /&amp;gt;&lt;br /&gt;
	Yulia Tsvetkov, Carnegie Mellon University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Alessandro Moschitti, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Shimei Pan, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Wenpeng Yin, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
	Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&amp;lt;br /&amp;gt;&lt;br /&gt;
Yansong Feng, Peking University, China&amp;lt;br /&amp;gt;&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Samuel Bowman, New York University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&amp;lt;br /&amp;gt;&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&amp;lt;br /&amp;gt;&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&amp;lt;br /&amp;gt;&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&amp;lt;br /&amp;gt;&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&amp;lt;br /&amp;gt;&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&amp;lt;br /&amp;gt;&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Fei Liu, University of Central Florida, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&amp;lt;br /&amp;gt;&lt;br /&gt;
Agata Savary, University of Tours, France&amp;lt;br /&amp;gt;&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Anna Feldman, Montclair State University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Kevin Small, Amazon, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&amp;lt;br /&amp;gt;&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= 	Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and determining where bias correction becomes censorship. All three invited speakers were selected to tie into the theme, and a Best Thematic Paper was awarded.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
Similar to what has been done in recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
This year included one-minute slides with pre-recorded audio showcasing the posters to be presented that day. The goal was to give posters more visibility. These were shown during the welcome reception, breakfast, and breaks.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* Diversity &amp;amp; Inclusion Organization&lt;br /&gt;
The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
&amp;lt;bunch of others, pull from their report&amp;gt;?&lt;br /&gt;
&lt;br /&gt;
* 	Submissions &lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
** Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
** Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
== An overview of statistics ==&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected for anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some partway through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit more papers into 3 days; as the conference grows, decisions will have to be made about continuing to add tracks, adding more days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed&lt;br /&gt;
| 1067&lt;br /&gt;
| 666&lt;br /&gt;
| 1733&lt;br /&gt;
| &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk&lt;br /&gt;
| 140&lt;br /&gt;
| 72&lt;br /&gt;
| 212&lt;br /&gt;
| 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster&lt;br /&gt;
| 141&lt;br /&gt;
| 70&lt;br /&gt;
| 211&lt;br /&gt;
| 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted&lt;br /&gt;
| 281 (26.3%)&lt;br /&gt;
| 142 (21.3%)&lt;br /&gt;
| 423 (24.4%)&lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
! Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP&lt;br /&gt;
| 7 (57)&lt;br /&gt;
| 28 (17)&lt;br /&gt;
| Question Answering&lt;br /&gt;
| 73 (36)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling&lt;br /&gt;
| 24 (29)&lt;br /&gt;
| 14 (14)&lt;br /&gt;
| Resources and Evaluation&lt;br /&gt;
| 33 (27)&lt;br /&gt;
| 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems&lt;br /&gt;
| 64 (20)&lt;br /&gt;
| 18 (27)&lt;br /&gt;
| Semantics&lt;br /&gt;
| 80 (13)&lt;br /&gt;
| 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics&lt;br /&gt;
| 38 (21)&lt;br /&gt;
| 11 (36)&lt;br /&gt;
| Sentiment Analysis&lt;br /&gt;
| 32 (28)&lt;br /&gt;
| 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness&lt;br /&gt;
| 16 (25)&lt;br /&gt;
| 12 (50)&lt;br /&gt;
| Social Media&lt;br /&gt;
| 44 (18)&lt;br /&gt;
| 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Generation&lt;br /&gt;
| 46 (14)&lt;br /&gt;
| 19 (23)&lt;br /&gt;
| Speech&lt;br /&gt;
| 19 (31)&lt;br /&gt;
| 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction&lt;br /&gt;
| 46 (28)&lt;br /&gt;
| 16 (12)&lt;br /&gt;
| Style&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| 13 (30)&lt;br /&gt;
| Summarization&lt;br /&gt;
| 22 (27)&lt;br /&gt;
| 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP&lt;br /&gt;
| 100 (29)&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| Syntax&lt;br /&gt;
| 36 (52)&lt;br /&gt;
| 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation&lt;br /&gt;
| 49 (30)&lt;br /&gt;
| 53 (18)&lt;br /&gt;
| Text Mining&lt;br /&gt;
| 101 (18)&lt;br /&gt;
| 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP&lt;br /&gt;
| 43 (25)&lt;br /&gt;
| 28 (10)&lt;br /&gt;
| Theory and Formalisms&lt;br /&gt;
| 12 (58)&lt;br /&gt;
| 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications&lt;br /&gt;
| 60 (30)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
| Vision &amp;amp; Robotics&lt;br /&gt;
| 41 (12)&lt;br /&gt;
| 22 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology&lt;br /&gt;
| 24 (33)&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==       Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%) and ran a lunchtime Careers in Industry panel that was very well attended. The panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault, and the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference; 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered: authors who opted for the non-archival version will not have a paper available in the archive and are free to publish the work elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
=	Reviewing =&lt;br /&gt;
&lt;br /&gt;
==	Recruiting ACs and Reviewers  ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers. All volunteers were screened by the PCs and assigned AC/reviewer roles, and each area was seeded with a set of volunteer reviewers. Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per area. After the abstract deadline, we added more ACs to teams with larger-than-predicted submission counts. Our goal was to ensure greater diversity by including in each area some participants who had not been previously involved, and who therefore would not have been invited if the committees were built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair&lt;br /&gt;
! Reviewer&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
THERE IS ANOTHER GRAPHIC TO INSERT HERE&lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the largest overall number of submissions. &lt;br /&gt;
&lt;br /&gt;
== 	Deciding on the reject-without-review papers ==&lt;br /&gt;
Our process for identifying desk rejects has been very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As reviewing begins, reviewers may also identify issues that were not caught by the ACs, which they flag to the ACs or directly to the PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations of the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines, in either LaTeX or Word (thanks to Emily and Leon for introducing the concept)&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions, meaning that roughly 3% of all submissions were desk-rejected.&lt;br /&gt;
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected for anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some partway through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant we needed 5 parallel tracks to fit more papers into 3 days; as the conference grows, decisions will have to be made about continuing to add tracks, adding more days to the main conference, or lowering the acceptance rate. The overall technical program consists of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers are integrated into the program and are marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
INSERT CHART&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to Area Chairs.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer, though some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form, combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale so there was no “easy out” mid-point; distinct summary, strengths, and weaknesses sections to make the relevant parts easy to scan and compare; and the minimum-length feature of START enabled to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To be added:&#039;&#039;&#039; X reviews were received by the end of the review period, Y others within the next week; importance of double-blind reviewing&lt;br /&gt;
No author response: due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
Video Poster Highlights: used instead of one-minute madness; A/V failures have made it hard to assess effectiveness. &lt;br /&gt;
SRW papers integrated into sessions: positive feedback from participants and a better experience for students&lt;br /&gt;
We did not repeat the Test of Time awards from 2018; should these happen every N years to allow for a sliding window?&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: Probing the Need for Visual Context in Multimodal Machine Translation &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
=	Presentations=&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel) plus 2 dedicated Industry Track sessions; duration: 15 minutes for the talk + 3 minutes for questions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel); duration: 12 minutes for the talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= 	Timeline =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
=	Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73120</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73120"/>
		<updated>2019-07-19T18:49:40Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
== Area Chairs ==&lt;br /&gt;
&lt;br /&gt;
=== Biomedical NLP &amp;amp; Clinical Text Processing === &lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
=== Cognitive Modeling – Psycholinguistics === &lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
=== Dialog and Interactive systems === &lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&lt;br /&gt;
Zornitsa Kozareva, Google, USA&lt;br /&gt;
Sujith Ravi, Google, USA&lt;br /&gt;
Michael White, Ohio State University, USA&lt;br /&gt;
=== Discourse and Pragmatics === &lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
=== Ethics, Bias and Fairness === &lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
=== Generation === &lt;br /&gt;
He He, Amazon Web Services, USA&lt;br /&gt;
	Wei Xu, Ohio State University, USA&lt;br /&gt;
	Yue Zhang, Westlake University, China&lt;br /&gt;
=== Information Extraction === &lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
	David McClosky, Google, USA&lt;br /&gt;
	Gerard de Melo, Rutgers University, USA&lt;br /&gt;
	Timothy Miller, Boston Children’s Hospital, USA&lt;br /&gt;
	Mo Yu, IBM Research, USA&lt;br /&gt;
=== Information Retrieval === &lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&lt;br /&gt;
	Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
=== Machine Learning for NLP === &lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&lt;br /&gt;
	Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&lt;br /&gt;
	Marie-Francine Moens, KU Leuven, Belgium&lt;br /&gt;
	Vikram Ramanarayanan, Educational Testing Service, USA&lt;br /&gt;
	Anna Rumshisky, University of Massachusetts Lowell, USA&lt;br /&gt;
	Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
=== Machine Translation === &lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&lt;br /&gt;
	Daniel Cer, Google Research, USA&lt;br /&gt;
	Haitao Mi, Ant Financial US, USA&lt;br /&gt;
	Preslav Nakov, Qatar Computing Research Institute, Qatar&lt;br /&gt;
	Zhaopeng Tu, Tencent, China&lt;br /&gt;
=== Mixed Topics === &lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&lt;br /&gt;
	Steven Bethard, University of Arizona, USA&lt;br /&gt;
=== Multilingualism, Cross lingual resources === &lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&lt;br /&gt;
	Ekaterina Shutova, University of Amsterdam, Netherlands&lt;br /&gt;
	Yulia Tsvetkov, Carnegie Mellon University, USA&lt;br /&gt;
	Ivan Vulic, Cambridge University, UK&lt;br /&gt;
=== NLP Applications === &lt;br /&gt;
T. J. Hazen, Microsoft, USA&lt;br /&gt;
	Alessandro Moschitti, Amazon, USA&lt;br /&gt;
	Shimei Pan, University of Maryland Baltimore County, USA&lt;br /&gt;
	Wenpeng Yin, University of Pennsylvania, USA&lt;br /&gt;
	Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
=== Phonology, Morphology and Word Segmentation === &lt;br /&gt;
Ramy Eskander, Columbia University, USA&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
=== Question Answering === &lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&lt;br /&gt;
Yansong Feng, Peking University, China&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
=== Resources and Evaluation === &lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
=== Semantics === &lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&lt;br /&gt;
Samuel Bowman, New York University, USA&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
=== Sentiment Analysis === &lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
=== Social Media === &lt;br /&gt;
Dan Goldwasser, Purdue University, USA&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
=== Speech === &lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
=== Style === &lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
=== Summarization === &lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&lt;br /&gt;
Fei Liu, University of Central Florida, USA&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
=== Tagging, Chunking, Syntax and Parsing === &lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&lt;br /&gt;
Agata Savary, University of Tours, France&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
=== Text Mining === &lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&lt;br /&gt;
Anna Feldman, Montclair State University, USA&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&lt;br /&gt;
Kevin Small, Amazon, USA&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
=== Theory and Formalisms === &lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
=== Vision, Robotics and other grounding === &lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= 	Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and identifying where bias correction becomes censorship. The three invited speakers were all chosen to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
Similar to what has been done at recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
This year included one-minute slides with pre-recorded audio showcasing the posters to be presented that day. The goal was to give posters more visibility. These were shown during the welcome reception, breakfasts, and breaks.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* Diversity &amp;amp; Inclusion Organization&lt;br /&gt;
The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
&amp;lt;bunch of others, pull from their report&amp;gt;?&lt;br /&gt;
&lt;br /&gt;
* 	Submissions &lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
** Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
** Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or to ask about changes to authorship, titles, and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
== An overview of statistics ==&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 that never submitted a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some of them part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit more papers into 3 days; as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed&lt;br /&gt;
| 1067&lt;br /&gt;
| 666&lt;br /&gt;
| 1733&lt;br /&gt;
| &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk&lt;br /&gt;
| 140&lt;br /&gt;
| 72&lt;br /&gt;
| 212&lt;br /&gt;
| 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster&lt;br /&gt;
| 141&lt;br /&gt;
| 70&lt;br /&gt;
| 211&lt;br /&gt;
| 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted&lt;br /&gt;
| 281 (26.3%)&lt;br /&gt;
| 142 (21.3%)&lt;br /&gt;
| 423 (24.4%)&lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP&lt;br /&gt;
| 7 (57)&lt;br /&gt;
| 28 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling&lt;br /&gt;
| 24 (29)&lt;br /&gt;
| 14 (14)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems&lt;br /&gt;
| 64 (20)&lt;br /&gt;
| 18 (27)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics&lt;br /&gt;
| 38 (21)&lt;br /&gt;
| 11 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness&lt;br /&gt;
| 16 (25)&lt;br /&gt;
| 12 (50)&lt;br /&gt;
|- &lt;br /&gt;
| Generation&lt;br /&gt;
| 46 (14)&lt;br /&gt;
| 19 (23)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction&lt;br /&gt;
| 46 (28)&lt;br /&gt;
| 16 (12)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| 13 (30)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP&lt;br /&gt;
| 100 (29)&lt;br /&gt;
| 22 (22)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation&lt;br /&gt;
| 49 (30)&lt;br /&gt;
| 53 (18)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP&lt;br /&gt;
| 43 (25)&lt;br /&gt;
| 28 (10)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications&lt;br /&gt;
| 60 (30)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology&lt;br /&gt;
| 24 (33)&lt;br /&gt;
| 24 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Question Answering&lt;br /&gt;
| 73 (36)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Resources and Evaluation&lt;br /&gt;
| 33 (27)&lt;br /&gt;
| 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Semantics&lt;br /&gt;
| 80 (13)&lt;br /&gt;
| 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Sentiment Analysis&lt;br /&gt;
| 32 (28)&lt;br /&gt;
| 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Social Media&lt;br /&gt;
| 44 (18)&lt;br /&gt;
| 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Speech&lt;br /&gt;
| 19 (31)&lt;br /&gt;
| 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Style&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Summarization&lt;br /&gt;
| 22 (27)&lt;br /&gt;
| 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Syntax&lt;br /&gt;
| 36 (52)&lt;br /&gt;
| 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Text Mining&lt;br /&gt;
| 101 (18)&lt;br /&gt;
| 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Theory and Formalisms&lt;br /&gt;
| 12 (58)&lt;br /&gt;
| 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| Vision &amp;amp; Robotics&lt;br /&gt;
| 41 (12)&lt;br /&gt;
| 22 (36)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==       Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters, acceptance rate ~28%), and ran a lunchtime Careers in Industry panel that was very well attended. Panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault; the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered, meaning that authors who opted for the non-archival version will not have a paper available in the archive and are free to publish elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
=	Reviewing =&lt;br /&gt;
&lt;br /&gt;
==	Recruiting ACs and Reviewers  ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers. All volunteers were vetted by the PCs and assigned AC/reviewer roles, and each area was seeded with a set of volunteer reviewers. Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per area. After the abstract deadline, we added more ACs to teams with larger-than-predicted submission counts. Our goal was to ensure greater diversity by including in each area some participants who had not previously been involved, and therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair&lt;br /&gt;
! Reviewer&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
THERE IS ANOTHER GRAPHIC TO INSERT HERE&lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue and Vision. Text Mining ended up with the overall largest number of submissions.  &lt;br /&gt;
&lt;br /&gt;
== 	Deciding on the reject-without-review papers ==&lt;br /&gt;
Our process for identifying desk rejects has been very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As reviewing begins, reviewers may also identify issues that were not caught by ACs, which they flag to the ACs or directly to the PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations of the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines in either LaTeX or Word (thanks to Emily and Leon for introducing the concept)&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions, meaning that approximately 3% of the submissions were desk-rejected.&lt;br /&gt;
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 that never submitted a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some of them part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant we needed 5 parallel tracks to fit more papers into 3 days; as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consists of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers are integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
INSERT CHART&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to Area Chair assignments.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer; some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale so there was no “easy out” mid-point; distinct sections for summary, strengths, and weaknesses to make it easy to scan and compare relevant sections; and the minimum-length feature of START enabled to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To be added:&#039;&#039;&#039; X reviews were received by the end of the review period, Y others within the following week; importance of double-blind reviewing&lt;br /&gt;
No author response: due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
Video Poster highlights (instead of 1-minute madness): A/V failures made it hard to assess effectiveness. &lt;br /&gt;
SRW papers integrated into sessions: positive feedback from participants, better experience for students&lt;br /&gt;
Did not repeat Test of Time awards from 2018; should this happen every N years to allow for a sliding window?&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: Probing the Need for Visual Context in Multimodal Machine Translation &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
=	Presentations=&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), duration: 15 minutes for talk + 3 minutes for questions + 2 dedicated Industry Track sessions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= 	Timeline =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
=	Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73119</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73119"/>
		<updated>2019-07-19T18:44:06Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Timeline */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
= Area Chairs =&lt;br /&gt;
== FORMATTING TBD ==&lt;br /&gt;
Biomedical NLP &amp;amp; Clinical Text Processing&lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
Cognitive Modeling – Psycholinguistics&lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
Dialog and Interactive systems&lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&lt;br /&gt;
Zornitsa Kozareva, Google, USA&lt;br /&gt;
Sujith Ravi, Google, USA&lt;br /&gt;
Michael White, Ohio State University, USA&lt;br /&gt;
Discourse and Pragmatics&lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
Ethics, Bias and Fairness&lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
Generation&lt;br /&gt;
He He, Amazon Web Services, USA&lt;br /&gt;
	Wei Xu, Ohio State University, USA&lt;br /&gt;
	Yue Zhang, Westlake University, China&lt;br /&gt;
Information Extraction&lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
	David McClosky, Google, USA&lt;br /&gt;
	Gerard de Melo, Rutgers University, USA&lt;br /&gt;
	Timothy Miller, Boston Children’s Hospital, USA&lt;br /&gt;
	Mo Yu, IBM Research, USA&lt;br /&gt;
Information Retrieval&lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&lt;br /&gt;
	Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
Machine Learning for NLP&lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&lt;br /&gt;
	Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&lt;br /&gt;
	Marie-Francine Moens, KU Leuven, Belgium&lt;br /&gt;
	Vikram Ramanarayanan, Educational Testing Service, USA&lt;br /&gt;
	Anna Rumshisky, University of Massachusetts Lowell, USA&lt;br /&gt;
	Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
Machine Translation&lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&lt;br /&gt;
	Daniel Cer, Google Research, USA&lt;br /&gt;
	Haitao Mi, Ant Financial US, USA&lt;br /&gt;
	Preslav Nakov, Qatar Computing Research Institute, Qatar&lt;br /&gt;
	Zhaopeng Tu, Tencent, China&lt;br /&gt;
Mixed Topics&lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&lt;br /&gt;
	Steven Bethard, University of Arizona, USA&lt;br /&gt;
Multilingualism, Cross lingual resources&lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&lt;br /&gt;
	Ekaterina Shutova, University of Amsterdam, Netherlands&lt;br /&gt;
	Yulia Tsvetkov, Carnegie Mellon University, USA&lt;br /&gt;
	Ivan Vulic, Cambridge University, UK&lt;br /&gt;
NLP Applications&lt;br /&gt;
T. J. Hazen, Microsoft, USA&lt;br /&gt;
	Alessandro Moschitti, Amazon, USA&lt;br /&gt;
	Shimei Pan, University of Maryland Baltimore County, USA&lt;br /&gt;
	Wenpeng Yin, University of Pennsylvania, USA&lt;br /&gt;
	Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
Phonology, Morphology and Word Segmentation&lt;br /&gt;
Ramy Eskander, Columbia University, USA&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
Question Answering&lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&lt;br /&gt;
Yansong Feng, Peking University, China&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
Resources and Evaluation&lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
Semantics&lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&lt;br /&gt;
Samuel Bowman, New York University, USA&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
Sentiment Analysis&lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
Social Media&lt;br /&gt;
Dan Goldwasser, Purdue University, USA&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
Speech&lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
Style&lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
Summarization&lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&lt;br /&gt;
Fei Liu, University of Central Florida, USA&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
Tagging, Chunking, Syntax and Parsing&lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&lt;br /&gt;
Agata Savary, University of Tours, France&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
Text Mining&lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&lt;br /&gt;
Anna Feldman, Montclair State University, USA&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&lt;br /&gt;
Kevin Small, Amazon, USA&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
Theory and Formalisms&lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
Vision, Robotics and other grounding&lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= 	Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and identifying where bias correction becomes censorship. The three invited speakers were all chosen to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
As has been done in recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
This year we introduced one-minute slides with pre-recorded audio, showcasing the posters to be presented that day. The goal was to give posters more visibility. These were shown during the welcome reception, breakfasts, and breaks.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* Diversity &amp;amp; Inclusion Organization&lt;br /&gt;
The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
&amp;lt;bunch of others, pull from their report&amp;gt;?&lt;br /&gt;
&lt;br /&gt;
* 	Submissions &lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
** Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
** Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
== An overview of statistics ==&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted their full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected for anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some of them part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit more papers into 3 days; as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed&lt;br /&gt;
| 1067&lt;br /&gt;
| 666&lt;br /&gt;
| 1733&lt;br /&gt;
| &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk&lt;br /&gt;
| 140&lt;br /&gt;
| 72&lt;br /&gt;
| 212&lt;br /&gt;
| 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster&lt;br /&gt;
| 141&lt;br /&gt;
| 70&lt;br /&gt;
| 211&lt;br /&gt;
| 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted&lt;br /&gt;
| 281 (26.3%)&lt;br /&gt;
| 142 (21.3%)&lt;br /&gt;
| 423 (24.4%)&lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
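The counts and rates in the acceptance table can be sanity-checked with a short script; it is a sketch using only the figures already given in the table.&lt;br /&gt;

```python
# Sanity-check the acceptance figures reported in the table above.
reviewed = {"long": 1067, "short": 666}
talk = {"long": 140, "short": 72}
poster = {"long": 141, "short": 70}

for fmt in ("long", "short"):
    accepted = talk[fmt] + poster[fmt]
    rate = 100 * accepted / reviewed[fmt]
    print(f"{fmt}: {accepted}/{reviewed[fmt]} accepted ({rate:.1f}%)")

total_accepted = sum(talk.values()) + sum(poster.values())
total_reviewed = sum(reviewed.values())
print(f"total: {total_accepted}/{total_reviewed} "
      f"({100 * total_accepted / total_reviewed:.1f}%)")
# reproduces the 26.3% / 21.3% / 24.4% rates in the table
```

The per-format rates (26.3%, 21.3%) and overall rate (24.4%) come out exactly as reported.&lt;br /&gt;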
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP&lt;br /&gt;
| 7 (57)&lt;br /&gt;
| 28 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling&lt;br /&gt;
| 24 (29)&lt;br /&gt;
| 14 (14)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems&lt;br /&gt;
| 64 (20)&lt;br /&gt;
| 18 (27)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics&lt;br /&gt;
| 38 (21)&lt;br /&gt;
| 11 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness&lt;br /&gt;
| 16 (25)&lt;br /&gt;
| 12 (50)&lt;br /&gt;
|- &lt;br /&gt;
| Generation&lt;br /&gt;
| 46 (14)&lt;br /&gt;
| 19 (23)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction&lt;br /&gt;
| 46 (28)&lt;br /&gt;
| 16 (12)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| 13 (30)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP&lt;br /&gt;
| 100 (29)&lt;br /&gt;
| 22 (22)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation&lt;br /&gt;
| 49 (30)&lt;br /&gt;
| 53 (18)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP&lt;br /&gt;
| 43 (25)&lt;br /&gt;
| 28 (10)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications&lt;br /&gt;
| 60 (30)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology&lt;br /&gt;
| 24 (33)&lt;br /&gt;
| 24 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Question Answering&lt;br /&gt;
| 73 (36)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Resources and Evaluation&lt;br /&gt;
| 33 (27)&lt;br /&gt;
| 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Semantics&lt;br /&gt;
| 80 (13)&lt;br /&gt;
| 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Sentiment Analysis&lt;br /&gt;
| 32 (28)&lt;br /&gt;
| 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Social Media&lt;br /&gt;
| 44 (18)&lt;br /&gt;
| 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Speech&lt;br /&gt;
| 19 (31)&lt;br /&gt;
| 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Style&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Summarization&lt;br /&gt;
| 22 (27)&lt;br /&gt;
| 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Syntax&lt;br /&gt;
| 36 (52)&lt;br /&gt;
| 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Text Mining&lt;br /&gt;
| 101 (18)&lt;br /&gt;
| 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Theory and Formalisms&lt;br /&gt;
| 12 (58)&lt;br /&gt;
| 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| Vision &amp;amp; Robotics&lt;br /&gt;
| 41 (12)&lt;br /&gt;
| 22 (36)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral presentations and 18 posters; acceptance rate ~28%) and ran a very well-attended lunchtime Careers in Industry panel. The panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault, and the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered: authors who opted for the non-archival version will not have a paper in the archive and are free to publish the work elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
=	Reviewing =&lt;br /&gt;
&lt;br /&gt;
==	Recruiting ACs and Reviewers  ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers. All volunteers were screened by the PCs and assigned AC or reviewer roles, and each area was seeded with a set of volunteer reviewers; Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per area. After the abstract deadline, we added more ACs to teams with larger-than-predicted submission counts. Our goal was to ensure greater diversity by including in each area some participants who had not previously been involved, and who therefore would not have been invited had the committees been built from lists of previous reviewers. 390 of the 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair (%)&lt;br /&gt;
! Reviewer (%)&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
THERE IS ANOTHER GRAPHIC TO INSERT HERE&lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received more submissions than predicted: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the largest number of submissions overall.  &lt;br /&gt;
&lt;br /&gt;
== 	Deciding on the reject-without-review papers ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As reviewing begins, reviewers may also identify issues that were not caught by ACs, which they flag to the ACs or directly to the PCs. We then review each of these issues and make the final decision, to ensure that papers are handled consistently. This means each paper is checked for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations of the dual-submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines in either LaTeX or Word (thanks to Emily and Leon for introducing the concept)&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions, meaning a total of 79 papers (approximately 3% of submissions) were desk-rejected.&lt;br /&gt;
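That tally can be reproduced from the figures quoted above (a quick sketch; nothing here beyond the numbers already stated):&lt;br /&gt;

```python
# Desk-reject counts as of February 7th, taken from the text above.
desk_rejects = {"format": 44, "anonymity": 24, "dual_submission": 11}
submissions = 2378

total = sum(desk_rejects.values())   # 79 papers
rate = 100 * total / submissions     # ~3.3% of all submissions
print(f"{total} desk rejects ({rate:.1f}% of {submissions} submissions)")
```

The 79 desk rejects here match the count given in the statistics overview.&lt;br /&gt;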
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted their full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected for anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some of them part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant we needed 5 parallel tracks to fit more papers into 3 days; as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked accordingly.&lt;br /&gt;
&lt;br /&gt;
INSERT CHART&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load rebalancing and conflict resolution using keywords and manual inspection of the papers. Authors were blind to Area Chair assignments.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
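To illustrate the kind of combination described above, here is a toy sketch of merging affinity scores and reviewer bids into a load-capped greedy assignment. The scores, weights, and names are entirely hypothetical; the actual process used TPMS output, AC expertise, and manual tweaking, not this formula.&lt;br /&gt;

```python
# Hypothetical sketch: combine affinity scores and bids into an assignment.
MAX_LOAD = 5  # target cap of 5 papers per reviewer

affinity = {  # TPMS-style paper/reviewer similarity in [0, 1] (made-up values)
    ("p1", "r1"): 0.9, ("p1", "r2"): 0.4,
    ("p2", "r1"): 0.5, ("p2", "r2"): 0.8,
}
bids = {  # reviewer bids: higher means more interest (made-up values)
    ("p1", "r1"): 2, ("p1", "r2"): 1,
    ("p2", "r1"): 0, ("p2", "r2"): 2,
}

def score(pair):
    # simple weighted combination; the weights are illustrative only
    return 0.7 * affinity[pair] + 0.3 * (bids[pair] / 2)

load = {"r1": 0, "r2": 0}
assignment = {}
for paper in ("p1", "p2"):
    candidates = [r for r in load if load[r] < MAX_LOAD]
    best = max(candidates, key=lambda r: score((paper, r)))
    assignment[paper] = best
    load[best] += 1

print(assignment)  # greedy choice: p1 -> r1, p2 -> r2
```

In practice a global matcher rather than a greedy loop is used, but the ingredients (expertise signal, bids, per-reviewer cap) are the same ones listed above.&lt;br /&gt;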
&lt;br /&gt;
The goal was no more than 5 papers per reviewer, though some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form, combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale so there was no “easy out” mid-point; distinct sections for summary, strengths, and weaknesses, to make it easy to scan and compare the relevant parts of each review; and START’s minimum-length feature enabled, to elicit more consistently substantive content for authors. The form received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it with HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To be added:&#039;&#039;&#039; X reviews were received by the end of the review period, Y others within the following week; the importance of double-blind reviewing&lt;br /&gt;
No author response: omitted due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
Video Poster Highlights: used instead of 1-minute madness; A/V failures have made it hard to assess their effectiveness. &lt;br /&gt;
SRW papers integrated into sessions: positive feedback from participants and a better experience for students&lt;br /&gt;
The Test of Time awards from 2018 were not repeated--should this happen every N years to allow for a sliding window?&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
= Presentations =&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), duration: 15 minutes for talk + 3 minutes for questions + 2 dedicated Industry Track sessions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= 	Timeline =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;br /&gt;
&lt;br /&gt;
=	Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73118</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73118"/>
		<updated>2019-07-19T18:43:39Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Review Process */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
= Area Chairs =&lt;br /&gt;
== FORMATTING TBD ==&lt;br /&gt;
Biomedical NLP &amp;amp; Clinical Text Processing&lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
Cognitive Modeling – Psycholinguistics&lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
Dialog and Interactive systems&lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&lt;br /&gt;
Zornitsa Kozareva, Google, USA&lt;br /&gt;
Sujith Ravi, Google, USA&lt;br /&gt;
Michael White, Ohio State University, USA&lt;br /&gt;
Discourse and Pragmatics&lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
Ethics, Bias and Fairness&lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
Generation&lt;br /&gt;
He He, Amazon Web Services, USA&lt;br /&gt;
Wei Xu, Ohio State University, USA&lt;br /&gt;
Yue Zhang, Westlake University, China&lt;br /&gt;
Information Extraction&lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
David McClosky, Google, USA&lt;br /&gt;
Gerard de Melo, Rutgers University, USA&lt;br /&gt;
Timothy Miller, Boston Children’s Hospital, USA&lt;br /&gt;
Mo Yu, IBM Research, USA&lt;br /&gt;
Information Retrieval&lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&lt;br /&gt;
Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
Machine Learning for NLP&lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&lt;br /&gt;
Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&lt;br /&gt;
Marie-Francine Moens, KU Leuven, Belgium&lt;br /&gt;
Vikram Ramanarayanan, Educational Testing Service, USA&lt;br /&gt;
Anna Rumshisky, University of Massachusetts Lowell, USA&lt;br /&gt;
Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
Machine Translation&lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&lt;br /&gt;
Daniel Cer, Google Research, USA&lt;br /&gt;
Haitao Mi, Ant Financial US, USA&lt;br /&gt;
Preslav Nakov, Qatar Computing Research Institute, Qatar&lt;br /&gt;
Zhaopeng Tu, Tencent, China&lt;br /&gt;
Mixed Topics&lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&lt;br /&gt;
Steven Bethard, University of Arizona, USA&lt;br /&gt;
Multilingualism, Cross lingual resources&lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&lt;br /&gt;
Ekaterina Shutova, University of Amsterdam, Netherlands&lt;br /&gt;
Yulia Tsvetkov, Carnegie Mellon University, USA&lt;br /&gt;
Ivan Vulic, Cambridge University, UK&lt;br /&gt;
NLP Applications&lt;br /&gt;
T. J. Hazen, Microsoft, USA&lt;br /&gt;
Alessandro Moschitti, Amazon, USA&lt;br /&gt;
Shimei Pan, University of Maryland Baltimore County, USA&lt;br /&gt;
Wenpeng Yin, University of Pennsylvania, USA&lt;br /&gt;
Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
Phonology, Morphology and Word Segmentation&lt;br /&gt;
Ramy Eskander, Columbia University, USA&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
Question Answering&lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&lt;br /&gt;
Yansong Feng, Peking University, China&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
Resources and Evaluation&lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
Semantics&lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&lt;br /&gt;
Samuel Bowman, New York University, USA&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
Sentiment Analysis&lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
Social Media&lt;br /&gt;
Dan Goldwasser, Purdue University, USA&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
Speech&lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
Style&lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
Summarization&lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&lt;br /&gt;
Fei Liu, University of Central Florida, USA&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
Tagging, Chunking, Syntax and Parsing&lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&lt;br /&gt;
Agata Savary, University of Tours, France&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
Text Mining&lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&lt;br /&gt;
Anna Feldman, Montclair State University, USA&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&lt;br /&gt;
Kevin Small, Amazon, USA&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
Theory and Formalisms&lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
Vision, Robotics and other grounding&lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and identifying where bias correction becomes censorship. All three invited speakers were selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
As has been done in recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
This year we introduced one-minute slides with pre-recorded audio, showcasing the posters to be presented that day. The goal was to give posters more visibility. These were shown during the welcome reception, breakfasts, and breaks.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* Diversity &amp;amp; Inclusion Organization&lt;br /&gt;
The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
&amp;lt;bunch of others, pull from their report&amp;gt;?&lt;br /&gt;
&lt;br /&gt;
* 	Submissions &lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
** Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
** Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
== An overview of statistics ==&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted their full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected for anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some of them part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit more papers into 3 days; as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed&lt;br /&gt;
| 1067&lt;br /&gt;
| 666&lt;br /&gt;
| 1733&lt;br /&gt;
| &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk&lt;br /&gt;
| 140&lt;br /&gt;
| 72&lt;br /&gt;
| 212&lt;br /&gt;
| 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster&lt;br /&gt;
| 141&lt;br /&gt;
| 70&lt;br /&gt;
| 211&lt;br /&gt;
| 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted&lt;br /&gt;
| 281 (26.3%)&lt;br /&gt;
| 142 (21.3%)&lt;br /&gt;
| 423 (24.4%)&lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
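The counts and rates in the acceptance table can be sanity-checked with a short script; it is a sketch using only the figures already given in the table.&lt;br /&gt;

```python
# Sanity-check the acceptance figures reported in the table above.
reviewed = {"long": 1067, "short": 666}
talk = {"long": 140, "short": 72}
poster = {"long": 141, "short": 70}

for fmt in ("long", "short"):
    accepted = talk[fmt] + poster[fmt]
    rate = 100 * accepted / reviewed[fmt]
    print(f"{fmt}: {accepted}/{reviewed[fmt]} accepted ({rate:.1f}%)")

total_accepted = sum(talk.values()) + sum(poster.values())
total_reviewed = sum(reviewed.values())
print(f"total: {total_accepted}/{total_reviewed} "
      f"({100 * total_accepted / total_reviewed:.1f}%)")
# reproduces the 26.3% / 21.3% / 24.4% rates in the table
```

The per-format rates (26.3%, 21.3%) and overall rate (24.4%) come out exactly as reported.&lt;br /&gt;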
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP&lt;br /&gt;
| 7 (57)&lt;br /&gt;
| 28 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling&lt;br /&gt;
| 24 (29)&lt;br /&gt;
| 14 (14)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems&lt;br /&gt;
| 64 (20)&lt;br /&gt;
| 18 (27)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics&lt;br /&gt;
| 38 (21)&lt;br /&gt;
| 11 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness&lt;br /&gt;
| 16 (25)&lt;br /&gt;
| 12 (50)&lt;br /&gt;
|- &lt;br /&gt;
| Generation&lt;br /&gt;
| 46 (14)&lt;br /&gt;
| 19 (23)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction&lt;br /&gt;
| 46 (28)&lt;br /&gt;
| 16 (12)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| 13 (30)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP&lt;br /&gt;
| 100 (29)&lt;br /&gt;
| 22 (22)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation&lt;br /&gt;
| 49 (30)&lt;br /&gt;
| 53 (18)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP&lt;br /&gt;
| 43 (25)&lt;br /&gt;
| 28 (10)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications&lt;br /&gt;
| 60 (30)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology&lt;br /&gt;
| 24 (33)&lt;br /&gt;
| 24 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Question Answering&lt;br /&gt;
| 73 (36)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Resources and Evaluation&lt;br /&gt;
| 33 (27)&lt;br /&gt;
| 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Semantics&lt;br /&gt;
| 80 (13)&lt;br /&gt;
| 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Sentiment Analysis&lt;br /&gt;
| 32 (28)&lt;br /&gt;
| 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Social Media&lt;br /&gt;
| 44 (18)&lt;br /&gt;
| 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Speech&lt;br /&gt;
| 19 (31)&lt;br /&gt;
| 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Style&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Summarization&lt;br /&gt;
| 22 (27)&lt;br /&gt;
| 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Syntax&lt;br /&gt;
| 36 (52)&lt;br /&gt;
| 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Text Mining&lt;br /&gt;
| 101 (18)&lt;br /&gt;
| 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Theory and Formalisms&lt;br /&gt;
| 12 (58)&lt;br /&gt;
| 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| Vision &amp;amp; Robotics&lt;br /&gt;
| 41 (12)&lt;br /&gt;
| 22 (36)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%) and ran a well-attended lunchtime Careers in Industry panel. Panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault; Phil Resnik moderated. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered, meaning that authors who opted for the non-archival version will not have a paper available in the archive and are free to publish elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
= Reviewing =&lt;br /&gt;
&lt;br /&gt;
== Recruiting ACs and Reviewers ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit the Area Chairs and Reviewers. All volunteers were screened by the PCs and assigned AC or reviewer roles, and each area was seeded with a set of volunteer reviewers. Area Chairs then filled out the remainder of their respective committees. There were 25 specific areas plus one for “Mixed Topics”, with at least 2 ACs per topic area. After the abstract deadline, we added more ACs to areas with larger-than-predicted submissions. Our goal was to ensure greater diversity by including in each area some participants who had not previously been involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair (%)&lt;br /&gt;
! Reviewer (%)&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73.0&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
THERE IS ANOTHER GRAPHIC TO INSERT HERE&lt;br /&gt;
&lt;br /&gt;
== Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the largest overall number of submissions. &lt;br /&gt;
&lt;br /&gt;
== Deciding on the reject-without-review papers ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As reviewing begins, reviewers may also identify issues that were not caught by ACs, which they flag to ACs or directly to PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations of the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines in either LaTeX or Word (thanks to Emily and Leon for introducing the concept).&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions. This means that a total of 79 papers (3.3% of submissions) were desk-rejected.&lt;br /&gt;
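The desk-reject totals above can be cross-checked with a few lines of arithmetic (an illustrative Python sketch, not part of the original report; the category labels are ours):

```python
# Desk-reject counts reported as of February 7th.
submissions = 2378
desk_rejects = {"format": 44, "anonymity": 24, "dual submission": 11}

# Sum the categories and express them as a share of all submissions.
total_rejected = sum(desk_rejects.values())
rate = 100 * total_rejected / submissions

print(f"{total_rejected} desk rejects ({rate:.1f}% of {submissions} submissions)")
```

The sum comes to 79 papers, i.e. roughly 3.3% of the 2378 submissions, matching the 79 desk rejects cited in the review-process statistics.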
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 that never submitted a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn prior to acceptance decisions being sent (some part way through the review process); and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant we needed 5 parallel tracks to fit more papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consists of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers are integrated into the program and marked accordingly.&lt;br /&gt;
&lt;br /&gt;
INSERT CHART&lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load-rebalancing and conflict resolution using keywords and manual inspection of the paper. Authors were blind to Area Chairs.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer; some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale so there was no “easy out” mid-point; distinct summary, strengths, and weaknesses sections to make reviews easy to scan and compare; and START’s minimum-length feature enabled to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To be added:&#039;&#039;&#039; X reviews were received by the end of the review period, Y others within the next week; importance of double-blind reviewing.&lt;br /&gt;
No author response: due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
Video poster highlights (instead of 1-minute madness): A/V failures have made it hard to assess effectiveness. &lt;br /&gt;
SRW papers integrated into sessions: positive feedback from participants, better experience for students&lt;br /&gt;
Did not repeat the Test of Time awards from 2018--should this happen every N years to allow for a sliding window?&lt;br /&gt;
&lt;br /&gt;
= Best paper awards =&lt;br /&gt;
&lt;br /&gt;
* Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
* Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper: &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
* Best Short Paper: &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper: &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
= Presentations =&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), duration: 15 minutes for talk + 3 minutes for questions + 2 dedicated Industry Track sessions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= Timeline =&lt;br /&gt;
&lt;br /&gt;
= Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73115</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73115"/>
		<updated>2019-07-19T18:29:37Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Review Process */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
= Area Chairs =&lt;br /&gt;
&#039;&#039;&#039;Biomedical NLP &amp;amp; Clinical Text Processing&#039;&#039;&#039;&lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Cognitive Modeling – Psycholinguistics&#039;&#039;&#039;&lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dialog and Interactive systems&#039;&#039;&#039;&lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&lt;br /&gt;
Zornitsa Kozareva, Google, USA&lt;br /&gt;
Sujith Ravi, Google, USA&lt;br /&gt;
Michael White, Ohio State University, USA&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Discourse and Pragmatics&#039;&#039;&#039;&lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ethics, Bias and Fairness&#039;&#039;&#039;&lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Generation&#039;&#039;&#039;&lt;br /&gt;
He He, Amazon Web Services, USA&lt;br /&gt;
Wei Xu, Ohio State University, USA&lt;br /&gt;
Yue Zhang, Westlake University, China&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Information Extraction&#039;&#039;&#039;&lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
David McClosky, Google, USA&lt;br /&gt;
Gerard de Melo, Rutgers University, USA&lt;br /&gt;
Timothy Miller, Boston Children’s Hospital, USA&lt;br /&gt;
Mo Yu, IBM Research, USA&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Information Retrieval&#039;&#039;&#039;&lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&lt;br /&gt;
Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Machine Learning for NLP&#039;&#039;&#039;&lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&lt;br /&gt;
Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&lt;br /&gt;
Marie-Francine Moens, KU Leuven, Belgium&lt;br /&gt;
Vikram Ramanarayanan, Educational Testing Service, USA&lt;br /&gt;
Anna Rumshisky, University of Massachusetts Lowell, USA&lt;br /&gt;
Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Machine Translation&#039;&#039;&#039;&lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&lt;br /&gt;
Daniel Cer, Google Research, USA&lt;br /&gt;
Haitao Mi, Ant Financial US, USA&lt;br /&gt;
Preslav Nakov, Qatar Computing Research Institute, Qatar&lt;br /&gt;
Zhaopeng Tu, Tencent, China&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Mixed Topics&#039;&#039;&#039;&lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&lt;br /&gt;
Steven Bethard, University of Arizona, USA&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Multilingualism, Cross lingual resources&#039;&#039;&#039;&lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&lt;br /&gt;
Ekaterina Shutova, University of Amsterdam, Netherlands&lt;br /&gt;
Yulia Tsvetkov, Carnegie Mellon University, USA&lt;br /&gt;
Ivan Vulic, Cambridge University, UK&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NLP Applications&#039;&#039;&#039;&lt;br /&gt;
T. J. Hazen, Microsoft, USA&lt;br /&gt;
Alessandro Moschitti, Amazon, USA&lt;br /&gt;
Shimei Pan, University of Maryland Baltimore County, USA&lt;br /&gt;
Wenpeng Yin, University of Pennsylvania, USA&lt;br /&gt;
Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Phonology, Morphology and Word Segmentation&#039;&#039;&#039;&lt;br /&gt;
Ramy Eskander, Columbia University, USA&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Question Answering&#039;&#039;&#039;&lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&lt;br /&gt;
Yansong Feng, Peking University, China&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Resources and Evaluation&#039;&#039;&#039;&lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Semantics&#039;&#039;&#039;&lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&lt;br /&gt;
Samuel Bowman, New York University, USA&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sentiment Analysis&#039;&#039;&#039;&lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Social Media&#039;&#039;&#039;&lt;br /&gt;
Dan Goldwasser, Purdue University, USA&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Speech&#039;&#039;&#039;&lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Style&#039;&#039;&#039;&lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Summarization&#039;&#039;&#039;&lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&lt;br /&gt;
Fei Liu, University of Central Florida, USA&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tagging, Chunking, Syntax and Parsing&#039;&#039;&#039;&lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&lt;br /&gt;
Agata Savary, University of Tours, France&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Text Mining&#039;&#039;&#039;&lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&lt;br /&gt;
Anna Feldman, Montclair State University, USA&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&lt;br /&gt;
Kevin Small, Amazon, USA&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Theory and Formalisms&#039;&#039;&#039;&lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Vision, Robotics and other grounding&#039;&#039;&#039;&lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
Similar to what has been done at recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
This year included one-minute slides with pre-recorded audio that showcased the posters to be presented that day. The goal was to give posters more visibility. These were shown during the welcome reception, breakfasts, and breaks.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* Diversity &amp;amp; Inclusion Organization&lt;br /&gt;
The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
&amp;lt;bunch of others, pull from their report&amp;gt;?&lt;br /&gt;
&lt;br /&gt;
* Submissions &lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
** Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
** Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it; authors did not&lt;br /&gt;
&lt;br /&gt;
== An overview of statistics ==&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 that never submitted a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn prior to acceptance decisions being sent (some part way through the review process); and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit more papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|-&lt;br /&gt;
| Reviewed&lt;br /&gt;
| 1067&lt;br /&gt;
| 666&lt;br /&gt;
| 1733&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| Accepted as talk&lt;br /&gt;
| 140&lt;br /&gt;
| 72&lt;br /&gt;
| 212&lt;br /&gt;
| 4&lt;br /&gt;
|-&lt;br /&gt;
| Accepted as poster&lt;br /&gt;
| 141&lt;br /&gt;
| 70&lt;br /&gt;
| 211&lt;br /&gt;
| 5&lt;br /&gt;
|-&lt;br /&gt;
| Total Accepted&lt;br /&gt;
| 281 (26.3%)&lt;br /&gt;
| 142 (21.3%)&lt;br /&gt;
| 423 (24.4%)&lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
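As a quick sanity check, the acceptance percentages in the break-down above can be recomputed from the raw counts (an illustrative Python sketch; the variable names are ours, not from the report):

```python
# Raw counts from the acceptance break-down table.
reviewed = {"Long": 1067, "Short": 666}
accepted = {"Long": 140 + 141, "Short": 72 + 70}  # talks + posters

# Acceptance rate per format and overall, as percentages.
rates = {fmt: 100 * accepted[fmt] / reviewed[fmt] for fmt in reviewed}
total_rate = 100 * sum(accepted.values()) / sum(reviewed.values())

for fmt in reviewed:
    print(f"{fmt}: {accepted[fmt]} ({rates[fmt]:.1f}%)")
print(f"Total: {sum(accepted.values())} ({total_rate:.1f}%)")
```

This reproduces the quoted figures: 281 long (26.3%), 142 short (21.3%), and 423 total (24.4%).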
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Area !! Long (%) !! Short (%) !! Area !! Long (%) !! Short (%)&lt;br /&gt;
|-&lt;br /&gt;
| Bio and clinical NLP || 7 (57) || 28 (17) || Question Answering || 73 (36) || 41 (17)&lt;br /&gt;
|-&lt;br /&gt;
| Cognitive modeling || 24 (29) || 14 (14) || Resources and Evaluation || 33 (27) || 20 (20)&lt;br /&gt;
|-&lt;br /&gt;
| Dialog and Interactive systems || 64 (20) || 18 (27) || Semantics || 80 (13) || 42 (11)&lt;br /&gt;
|-&lt;br /&gt;
| Discourse and Pragmatics || 38 (21) || 11 (36) || Sentiment Analysis || 32 (28) || 40 (20)&lt;br /&gt;
|-&lt;br /&gt;
| Ethics, Bias and Fairness || 16 (25) || 12 (50) || Social Media || 44 (18) || 41 (36)&lt;br /&gt;
|-&lt;br /&gt;
| Generation || 46 (14) || 19 (23) || Speech || 19 (31) || 9 (33)&lt;br /&gt;
|-&lt;br /&gt;
| Information Extraction || 46 (28) || 16 (12) || Style || 24 (25) || 16 (25)&lt;br /&gt;
|-&lt;br /&gt;
| Information Retrieval || 22 (22) || 13 (30) || Summarization || 22 (27) || 28 (28)&lt;br /&gt;
|-&lt;br /&gt;
| Machine Learning for NLP || 100 (29) || 22 (22) || Syntax || 36 (52) || 54 (13)&lt;br /&gt;
|-&lt;br /&gt;
| Machine Translation || 49 (30) || 53 (18) || Text Mining || 101 (18) || 29 (24)&lt;br /&gt;
|-&lt;br /&gt;
| Multilingual NLP || 43 (25) || 28 (10) || Theory and Formalisms || 12 (58) || 12 (16)&lt;br /&gt;
|-&lt;br /&gt;
| NLP Applications || 60 (30) || 41 (17) || Vision &amp;amp; Robotics || 41 (12) || 22 (36)&lt;br /&gt;
|-&lt;br /&gt;
| Phonology || 24 (33) || 24 (25) || || ||&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%) and ran a well-attended lunchtime Careers in Industry panel. Panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault; Phil Resnik moderated. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered, meaning that authors who opted for the non-archival version will not have a paper available in the archive and are free to publish elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
= Review Process =&lt;br /&gt;
We issued a wide call for volunteers for Area Chairs (ACs) and reviewers. &lt;br /&gt;
The PCs created 25 specific areas plus one for “Mixed Topics” and assigned at least 2 ACs per topic area. After the abstract deadline, we added more ACs to areas with larger-than-predicted submissions. &lt;br /&gt;
&lt;br /&gt;
Assignment to areas used the initial START assignments, followed by load-rebalancing and conflict resolution using keywords and manual inspection of the paper. Authors were blind to Area Chairs.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids + manual tweaking&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer; some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
No author response: due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
Video poster highlights (instead of 1-minute madness): A/V failures have made it hard to assess effectiveness. &lt;br /&gt;
SRW papers integrated into sessions: positive feedback from participants, better experience for students&lt;br /&gt;
Did not repeat the Test of Time awards from 2018--should this happen every N years to allow for a sliding window? &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Recruiting area chairs (ACs) and reviewers ==&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair (%)&lt;br /&gt;
! Reviewer (%)&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73.0&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
THERE IS ANOTHER GRAPHIC TO INSERT HERE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== A large pool of reviewers ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit the Area Chairs and Reviewers. All volunteers were screened by the PCs and assigned AC or reviewer roles, and each area was seeded with a set of volunteer reviewers. Area Chairs then filled out the remainder of their respective committees. Our goal was to ensure greater diversity by including in each area some participants who had not previously been involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
== Structured review form ==&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale so there was no “easy out” mid-point; distinct summary, strengths, and weaknesses sections to make reviews easy to scan and compare; and START’s minimum-length feature enabled to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
== Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the largest overall number of submissions. &lt;br /&gt;
&lt;br /&gt;
== Deciding on the reject-without-review papers ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As reviewing begins, reviewers may also identify issues that were not caught by ACs, which they flag to ACs or directly to PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations of the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines, in either LaTeX or Word (thanks to Emily and Leon for introducing the concept).&lt;br /&gt;
&lt;br /&gt;
As of February 7th, out of 2378 submissions there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions, meaning that approximately 3% of the submissions were desk-rejected.&lt;br /&gt;
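The arithmetic behind that figure can be checked with a quick computation (a minimal sketch; the counts are the ones quoted above):

```python
# Desk-reject counts as of February 7th, quoted above
rejections = {"format": 44, "anonymity": 24, "dual submission": 11}
total_submissions = 2378

desk_rejected = sum(rejections.values())
rate = desk_rejected / total_submissions

# 79 desk rejects out of 2378 submissions, i.e. about 3.3%
print(f"{desk_rejected}/{total_submissions} = {rate:.1%}")
```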
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant we needed 5 parallel tracks to fit more papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consists of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers are integrated into the program and are marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
INSERT CHART&lt;br /&gt;
&lt;br /&gt;
To be added : X reviews were received by the end of the review period, Y others within the next week.; Importance of double blind reviewing&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
=	Presentations=&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), plus 2 dedicated Industry Track sessions; each talk was 15 minutes plus 3 minutes for questions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel); each talk was 12 minutes plus 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= 	Timeline =&lt;br /&gt;
&lt;br /&gt;
=	Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73114</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73114"/>
		<updated>2019-07-19T18:18:21Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Recruiting area chairs (ACs) and reviewers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
= Area Chairs =&lt;br /&gt;
== FORMATTING TBD ==&lt;br /&gt;
Biomedical NLP &amp;amp; Clinical Text Processing&lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
Cognitive Modeling – Psycholinguistics&lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
Dialog and Interactive systems&lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&lt;br /&gt;
Zornitsa Kozareva, Google, USA&lt;br /&gt;
Sujith Ravi, Google, USA&lt;br /&gt;
Michael White, Ohio State University, USA&lt;br /&gt;
Discourse and Pragmatics&lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
Ethics, Bias and Fairness&lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
Generation&lt;br /&gt;
He He, Amazon Web Services, USA&lt;br /&gt;
	Wei Xu, Ohio State University, USA&lt;br /&gt;
	Yue Zhang, Westlake University, China&lt;br /&gt;
Information Extraction&lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
	David McClosky, Google, USA&lt;br /&gt;
	Gerard de Melo, Rutgers University, USA&lt;br /&gt;
	Timothy Miller, Boston Children’s Hospital, USA&lt;br /&gt;
	Mo Yu, IBM Research, USA&lt;br /&gt;
Information Retrieval&lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&lt;br /&gt;
	Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
Machine Learning for NLP&lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&lt;br /&gt;
	Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&lt;br /&gt;
	Marie-Francine Moens, KU Leuven, Belgium&lt;br /&gt;
	Vikram Ramanarayanan, Educational Testing Service, USA&lt;br /&gt;
	Anna Rumshisky, University of Massachusetts Lowell, USA&lt;br /&gt;
	Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
Machine Translation&lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&lt;br /&gt;
	Daniel Cer, Google Research, USA&lt;br /&gt;
	Haitao Mi, Ant Financial US, USA&lt;br /&gt;
	Preslav Nakov, Qatar Computing Research Institute, Qatar&lt;br /&gt;
	Zhaopeng Tu, Tencent, China&lt;br /&gt;
Mixed Topics&lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&lt;br /&gt;
	Steven Bethard, University of Arizona, USA&lt;br /&gt;
Multilingualism, Cross lingual resources&lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&lt;br /&gt;
	Ekaterina Shutova, University of Amsterdam, Netherlands&lt;br /&gt;
	Yulia Tsvetkov, Carnegie Mellon University, USA&lt;br /&gt;
	Ivan Vulic, Cambridge University, UK&lt;br /&gt;
NLP Applications&lt;br /&gt;
T. J. Hazen, Microsoft, USA&lt;br /&gt;
	Alessandro Moschitti, Amazon, USA&lt;br /&gt;
	Shimei Pan, University of Maryland Baltimore County, USA&lt;br /&gt;
	Wenpeng Yin, University of Pennsylvania, USA&lt;br /&gt;
	Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
Phonology, Morphology and Word Segmentation&lt;br /&gt;
Ramy Eskander, Columbia University, USA&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
Question Answering&lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&lt;br /&gt;
Yansong Feng, Peking University, China&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
Resources and Evaluation&lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
Semantics&lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&lt;br /&gt;
Samuel Bowman, New York University, USA&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
Sentiment Analysis&lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
Social Media&lt;br /&gt;
Dan Goldwasser, Purdue University, USA&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
Speech&lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
Style&lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
Summarization&lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&lt;br /&gt;
Fei Liu, University of Central Florida, USA&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
Tagging, Chunking, Syntax and Parsing&lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&lt;br /&gt;
Agata Savary, University of Tours, France&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
Text Mining&lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&lt;br /&gt;
Anna Feldman, Montclair State University, USA&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&lt;br /&gt;
Kevin Small, Amazon, USA&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
Theory and Formalisms&lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
Vision, Robotics and other grounding&lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= 	Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
Similar to what has been done at recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
This year we introduced one-minute slides with pre-recorded audio that showcased the posters to be presented each day. The goal was to give posters more visibility. These were shown during the welcome reception, breakfasts, and breaks.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* Diversity &amp;amp; Inclusion Organization&lt;br /&gt;
The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
&amp;lt;bunch of others, pull from their report&amp;gt;?&lt;br /&gt;
&lt;br /&gt;
* 	Submissions &lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
** Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
** Con: too much overhead for PCs, as authors repeatedly contacted the chairs to request that papers be moved between long and short, or to ask about changes to authorship, titles, and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
== An overview of statistics ==&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit more papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| &lt;br /&gt;
! Long&lt;br /&gt;
! Short&lt;br /&gt;
! Total&lt;br /&gt;
! TACL&lt;br /&gt;
|- &lt;br /&gt;
| Reviewed&lt;br /&gt;
| 1067&lt;br /&gt;
| 666&lt;br /&gt;
| 1733&lt;br /&gt;
| &lt;br /&gt;
|- &lt;br /&gt;
| Accepted as talk&lt;br /&gt;
| 140&lt;br /&gt;
| 72&lt;br /&gt;
| 212&lt;br /&gt;
| 4&lt;br /&gt;
|- &lt;br /&gt;
| Accepted as poster&lt;br /&gt;
| 141&lt;br /&gt;
| 70&lt;br /&gt;
| 211&lt;br /&gt;
| 5&lt;br /&gt;
|- &lt;br /&gt;
| Total Accepted&lt;br /&gt;
| 281 (26.3%)&lt;br /&gt;
| 142 (21.3%)&lt;br /&gt;
| 423 (24.4%)&lt;br /&gt;
| 9&lt;br /&gt;
|}&lt;br /&gt;
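The percentages in the table follow directly from the raw counts; as a sanity check, they can be recomputed (a minimal sketch using the numbers from the table above):

```python
# Reviewed and accepted counts from the acceptance break-down table
reviewed = {"long": 1067, "short": 666}
accepted = {"long": 140 + 141, "short": 72 + 70}  # talks + posters

for fmt, n_reviewed in reviewed.items():
    print(f"{fmt}: {accepted[fmt]}/{n_reviewed} = {accepted[fmt] / n_reviewed:.1%}")

# Overall acceptance rate across both formats
total = sum(accepted.values()) / sum(reviewed.values())
print(f"total: {sum(accepted.values())}/{sum(reviewed.values())} = {total:.1%}")
```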
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Area&lt;br /&gt;
! Long (%)&lt;br /&gt;
! Short (%)&lt;br /&gt;
|- &lt;br /&gt;
| Bio and clinical NLP&lt;br /&gt;
| 7 (57)&lt;br /&gt;
| 28 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Cognitive modeling&lt;br /&gt;
| 24 (29)&lt;br /&gt;
| 14 (14)&lt;br /&gt;
|- &lt;br /&gt;
| Dialog and Interactive systems&lt;br /&gt;
| 64 (20)&lt;br /&gt;
| 18 (27)&lt;br /&gt;
|- &lt;br /&gt;
| Discourse and Pragmatics&lt;br /&gt;
| 38 (21)&lt;br /&gt;
| 11 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Ethics, Bias and Fairness&lt;br /&gt;
| 16 (25)&lt;br /&gt;
| 12 (50)&lt;br /&gt;
|- &lt;br /&gt;
| Generation&lt;br /&gt;
| 46 (14)&lt;br /&gt;
| 19 (23)&lt;br /&gt;
|- &lt;br /&gt;
| Information Extraction&lt;br /&gt;
| 46 (28)&lt;br /&gt;
| 16 (12)&lt;br /&gt;
|- &lt;br /&gt;
| Information Retrieval&lt;br /&gt;
| 22 (22)&lt;br /&gt;
| 13 (30)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Learning for NLP&lt;br /&gt;
| 100 (29)&lt;br /&gt;
| 22 (22)&lt;br /&gt;
|- &lt;br /&gt;
| Machine Translation&lt;br /&gt;
| 49 (30)&lt;br /&gt;
| 53 (18)&lt;br /&gt;
|- &lt;br /&gt;
| Multilingual NLP&lt;br /&gt;
| 43 (25)&lt;br /&gt;
| 28 (10)&lt;br /&gt;
|- &lt;br /&gt;
| NLP Applications&lt;br /&gt;
| 60 (30)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Phonology&lt;br /&gt;
| 24 (33)&lt;br /&gt;
| 24 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Question Answering&lt;br /&gt;
| 73 (36)&lt;br /&gt;
| 41 (17)&lt;br /&gt;
|- &lt;br /&gt;
| Resources and Evaluation&lt;br /&gt;
| 33 (27)&lt;br /&gt;
| 20 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Semantics&lt;br /&gt;
| 80 (13)&lt;br /&gt;
| 42 (11)&lt;br /&gt;
|- &lt;br /&gt;
| Sentiment Analysis&lt;br /&gt;
| 32 (28)&lt;br /&gt;
| 40 (20)&lt;br /&gt;
|- &lt;br /&gt;
| Social Media&lt;br /&gt;
| 44 (18)&lt;br /&gt;
| 41 (36)&lt;br /&gt;
|- &lt;br /&gt;
| Speech&lt;br /&gt;
| 19 (31)&lt;br /&gt;
| 9 (33)&lt;br /&gt;
|- &lt;br /&gt;
| Style&lt;br /&gt;
| 24 (25)&lt;br /&gt;
| 16 (25)&lt;br /&gt;
|- &lt;br /&gt;
| Summarization&lt;br /&gt;
| 22 (27)&lt;br /&gt;
| 28 (28)&lt;br /&gt;
|- &lt;br /&gt;
| Syntax&lt;br /&gt;
| 36 (52)&lt;br /&gt;
| 54 (13)&lt;br /&gt;
|- &lt;br /&gt;
| Text Mining&lt;br /&gt;
| 101 (18)&lt;br /&gt;
| 29 (24)&lt;br /&gt;
|- &lt;br /&gt;
| Theory and Formalisms&lt;br /&gt;
| 12 (58)&lt;br /&gt;
| 12 (16)&lt;br /&gt;
|- &lt;br /&gt;
| Vision &amp;amp; Robotics&lt;br /&gt;
| 41 (12)&lt;br /&gt;
| 22 (36)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==       Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%), and ran a lunchtime Careers in Industry panel that was very well attended. The panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault, and the moderator was Phil Resnik.&lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered: authors who opted for the non-archival version will not have a paper in the archive and are free to publish elsewhere.&lt;br /&gt;
&lt;br /&gt;
There were 25 accepted demos, spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
=	Review Process =&lt;br /&gt;
We issued a wide call for volunteers for Area Chairs (ACs) and reviewers. Volunteers were screened by the PCs and assigned AC or reviewer roles.&lt;br /&gt;
The PCs created 25 specific areas plus one for “Mixed Topics” and assigned at least 2 ACs per topic area. After the abstract deadline, we added more ACs to teams with larger-than-predicted submission volumes.&lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale, so there was no “easy out” mid-point; distinct sections for summary, strengths, and weaknesses, to make it easy to scan and compare the relevant parts of each review; and the minimum-length feature of START enabled, to elicit more consistently substantive content for the authors.&lt;br /&gt;
&lt;br /&gt;
Authors were blind to Area Chair assignments&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer, though some reviewers agreed to handle more.&lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
There was no author response period, due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
Video poster highlights replaced the 1-minute madness; A/V failures have made it hard to assess their effectiveness.&lt;br /&gt;
SRW papers integrated into sessions: positive feedback from participants, better experience for students&lt;br /&gt;
Did not repeat Test of Time awards from 2018--should this happen every N years to allow for sliding window? &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==	Recruiting area chairs (ACs) and reviewers ==&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by gender for ACs and reviewers ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Response&lt;br /&gt;
! Area Chair&lt;br /&gt;
! Reviewer&lt;br /&gt;
|- &lt;br /&gt;
| Female&lt;br /&gt;
| 24.4&lt;br /&gt;
| 25.2&lt;br /&gt;
|- &lt;br /&gt;
| Male&lt;br /&gt;
| 73&lt;br /&gt;
| 71.7&lt;br /&gt;
|- &lt;br /&gt;
| Prefer not to answer&lt;br /&gt;
| 2.6&lt;br /&gt;
| 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Breakdown by employment category and country for ACs and reviewers ===&lt;br /&gt;
&lt;br /&gt;
THERE IS ANOTHER GRAPHIC TO INSERT HERE&lt;br /&gt;
&lt;br /&gt;
==	Assigning papers to areas and reviewers ==&lt;br /&gt;
Assignment to areas was based on keywords and manual inspection of the paper. Assignment of papers to reviewers followed a combination of TPMS, reviewer bidding, and manual tweaking. &lt;br /&gt;
&lt;br /&gt;
== 	Deciding on the reject-without-review papers ==&lt;br /&gt;
Our process for identifying desk rejects has been very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As reviewing begins, reviewers may also identify issues that were not caught by ACs, which they flag to the ACs or directly to the PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
* Violations of the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines, in either LaTeX or Word (thanks to Emily and Leon for introducing the concept).&lt;br /&gt;
As of February 7th, out of 2378 submissions there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions, meaning that approximately 3% of the submissions were desk-rejected.&lt;br /&gt;
&lt;br /&gt;
==	A large pool of reviewers ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit the Area Chairs and reviewers--we seeded the areas with the volunteers who responded, and Area Chairs then filled out the remainder of their respective committees. Our goal was to ensure greater diversity by including in each area some participants who had not previously been involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
==	Structured review form ==&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale, so there was no “easy out” mid-point; distinct sections for summary, strengths, and weaknesses, to make it easy to scan and compare the relevant parts of each review; and the minimum-length feature of START enabled, to elicit more consistently substantive content for the authors. The form received excellent feedback from authors, but some reviewers complained about it and others circumvented it outright via HTML tags or repeated filler content.&lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the largest overall number of submissions.&lt;br /&gt;
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant we needed 5 parallel tracks to fit more papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consists of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers are integrated into the program and are marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
To be added : X reviews were received by the end of the review period, Y others within the next week.; Importance of double blind reviewing&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
=	Presentations=&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), plus 2 dedicated Industry Track sessions; each talk was 15 minutes plus 3 minutes for questions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel); each talk was 12 minutes plus 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= 	Timeline =&lt;br /&gt;
&lt;br /&gt;
=	Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73113</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73113"/>
		<updated>2019-07-19T18:09:26Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* Review Process */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
= Area Chairs =&lt;br /&gt;
; Biomedical NLP &amp;amp; Clinical Text Processing&lt;br /&gt;
: Bridget McInnes, Virginia Commonwealth University, USA&lt;br /&gt;
: Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
; Cognitive Modeling – Psycholinguistics&lt;br /&gt;
: Serguei Pakhomov, University of Minnesota, USA&lt;br /&gt;
: Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
; Dialog and Interactive systems&lt;br /&gt;
: Nobuhiro Kaji, Yahoo Japan Corporation, Japan&lt;br /&gt;
: Zornitsa Kozareva, Google, USA&lt;br /&gt;
: Sujith Ravi, Google, USA&lt;br /&gt;
: Michael White, Ohio State University, USA&lt;br /&gt;
; Discourse and Pragmatics&lt;br /&gt;
: Ruihong Huang, Texas A&amp;amp;M University, USA&lt;br /&gt;
: Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
; Ethics, Bias and Fairness&lt;br /&gt;
: Saif Mohammad, National Research Council Canada, Canada&lt;br /&gt;
: Mark Yatskar, University of Washington, USA&lt;br /&gt;
; Generation&lt;br /&gt;
: He He, Amazon Web Services, USA&lt;br /&gt;
: Wei Xu, Ohio State University, USA&lt;br /&gt;
: Yue Zhang, Westlake University, China&lt;br /&gt;
; Information Extraction&lt;br /&gt;
: Heng Ji, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
: David McClosky, Google, USA&lt;br /&gt;
: Gerard de Melo, Rutgers University, USA&lt;br /&gt;
: Timothy Miller, Boston Children’s Hospital, USA&lt;br /&gt;
: Mo Yu, IBM Research, USA&lt;br /&gt;
; Information Retrieval&lt;br /&gt;
: Sumit Bhatia, IBM’s India Research Laboratory, India&lt;br /&gt;
: Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
; Machine Learning for NLP&lt;br /&gt;
: Ryan Cotterell, Johns Hopkins University, USA&lt;br /&gt;
: Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&lt;br /&gt;
: Marie-Francine Moens, KU Leuven, Belgium&lt;br /&gt;
: Vikram Ramanarayanan, Educational Testing Service, USA&lt;br /&gt;
: Anna Rumshisky, University of Massachusetts Lowell, USA&lt;br /&gt;
: Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
; Machine Translation&lt;br /&gt;
: Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&lt;br /&gt;
: Daniel Cer, Google Research, USA&lt;br /&gt;
: Haitao Mi, Ant Financial US, USA&lt;br /&gt;
: Preslav Nakov, Qatar Computing Research Institute, Qatar&lt;br /&gt;
: Zhaopeng Tu, Tencent, China&lt;br /&gt;
; Mixed Topics&lt;br /&gt;
: Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&lt;br /&gt;
: Steven Bethard, University of Arizona, USA&lt;br /&gt;
; Multilingualism, Cross lingual resources&lt;br /&gt;
: Željko Agić, IT University of Copenhagen, Denmark&lt;br /&gt;
: Ekaterina Shutova, University of Amsterdam, Netherlands&lt;br /&gt;
: Yulia Tsvetkov, Carnegie Mellon University, USA&lt;br /&gt;
: Ivan Vulic, Cambridge University, UK&lt;br /&gt;
; NLP Applications&lt;br /&gt;
: T. J. Hazen, Microsoft, USA&lt;br /&gt;
: Alessandro Moschitti, Amazon, USA&lt;br /&gt;
: Shimei Pan, University of Maryland Baltimore County, USA&lt;br /&gt;
: Wenpeng Yin, University of Pennsylvania, USA&lt;br /&gt;
: Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
; Phonology, Morphology and Word Segmentation&lt;br /&gt;
: Ramy Eskander, Columbia University, USA&lt;br /&gt;
: Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
; Question Answering&lt;br /&gt;
: Eduardo Blanco, University of North Texas, USA&lt;br /&gt;
: Christos Christodoulopoulos, Amazon, USA&lt;br /&gt;
: Asif Ekbal, Indian Institute of Technology Patna, India&lt;br /&gt;
: Yansong Feng, Peking University, China&lt;br /&gt;
: Tim Rocktäschel, Facebook, USA&lt;br /&gt;
: Avi Sil, IBM Research, USA&lt;br /&gt;
; Resources and Evaluation&lt;br /&gt;
: Torsten Zesch, University of Duisburg-Essen, Germany&lt;br /&gt;
: Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
; Semantics&lt;br /&gt;
: Ebrahim Bagheri, Ryerson University, Canada&lt;br /&gt;
: Samuel Bowman, New York University, USA&lt;br /&gt;
: Matt Gardner, Allen Institute for Artificial Intelligence, USA&lt;br /&gt;
: Kevin Gimpel, Toyota Technological Institute at Chicago, USA&lt;br /&gt;
: Daisuke Kawahara, Kyoto University, Japan&lt;br /&gt;
: Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
; Sentiment Analysis&lt;br /&gt;
: Isabelle Augenstein, University of Copenhagen, Denmark&lt;br /&gt;
: Wai Lam, The Chinese University of Hong Kong, Hong Kong&lt;br /&gt;
: Soujanya Poria, Nanyang Technological University, Singapore&lt;br /&gt;
: Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
; Social Media&lt;br /&gt;
: Dan Goldwasser, Purdue University, USA&lt;br /&gt;
: Michael J. Paul, University of Colorado Boulder, USA&lt;br /&gt;
: Sara Rosenthal, IBM Research, USA&lt;br /&gt;
: Paolo Rosso, Universitat Politècnica de València, Spain&lt;br /&gt;
: Chenhao Tan, University of Colorado Boulder, USA&lt;br /&gt;
: Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
; Speech&lt;br /&gt;
: Keelan Evanini, Educational Testing Service, USA&lt;br /&gt;
: Yang Liu, LAIX Inc, USA&lt;br /&gt;
; Style&lt;br /&gt;
: Beata Beigman Klebanov, Educational Testing Service, USA&lt;br /&gt;
: Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&lt;br /&gt;
: Joel Tetreault, Grammarly, USA&lt;br /&gt;
; Summarization&lt;br /&gt;
: Mohit Bansal, University of North Carolina Chapel Hill, USA&lt;br /&gt;
: Fei Liu, University of Central Florida, USA&lt;br /&gt;
: Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
; Tagging, Chunking, Syntax and Parsing&lt;br /&gt;
: Adam Lopez, University of Edinburgh, Scotland&lt;br /&gt;
: Roi Reichart, Technion – Israel Institute of Technology, Israel&lt;br /&gt;
: Agata Savary, University of Tours, France&lt;br /&gt;
: Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
; Text Mining&lt;br /&gt;
: Kai-Wei Chang, University of California Los Angeles, USA&lt;br /&gt;
: Anna Feldman, Montclair State University, USA&lt;br /&gt;
: Shervin Malmasi, Harvard Medical School, USA&lt;br /&gt;
: Verónica Pérez-Rosas, University of Michigan, USA&lt;br /&gt;
: Kevin Small, Amazon, USA&lt;br /&gt;
: Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
; Theory and Formalisms&lt;br /&gt;
: Valia Kordoni, Humboldt University Berlin, Germany&lt;br /&gt;
: Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
; Vision, Robotics and other grounding&lt;br /&gt;
: Francis Ferraro, University of Maryland Baltimore County, USA&lt;br /&gt;
: Vicente Ordóñez, University of Virginia, USA&lt;br /&gt;
: William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= Main Innovations =&lt;br /&gt;
* Conference theme&lt;br /&gt;
The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and determining where bias correction becomes censorship. The three invited speakers were all chosen to tie into the theme, and a Best Thematic Paper was awarded.&lt;br /&gt;
&lt;br /&gt;
* Land Acknowledgement&lt;br /&gt;
As in recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
This year&#039;s program included one-minute slides with pre-recorded audio showcasing the posters to be presented that day, with the goal of giving posters more visibility. These were shown during the welcome reception, breakfast, and breaks.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* Diversity &amp;amp; Inclusion Organization&lt;br /&gt;
The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives, including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodation needs&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
&amp;lt;bunch of others, pull from their report&amp;gt;?&lt;br /&gt;
&lt;br /&gt;
* Submissions&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
** Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
** Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it; authors did not&lt;br /&gt;
&lt;br /&gt;
== An overview of statistics ==&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that never became full papers in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit the papers into 3 days--as the conference grows, decisions will have to be made about continuing to add more tracks, adding more days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class="wikitable"&lt;br /&gt;
! !! Long !! Short !! Total !! TACL&lt;br /&gt;
|-&lt;br /&gt;
| Reviewed || 1067 || 666 || 1733 ||&lt;br /&gt;
|-&lt;br /&gt;
| Accepted as talk || 140 || 72 || 212 || 4&lt;br /&gt;
|-&lt;br /&gt;
| Accepted as poster || 141 || 70 || 211 || 5&lt;br /&gt;
|-&lt;br /&gt;
| Total Accepted || 281 (26.3%) || 142 (21.3%) || 423 (24.4%) || 9&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class="wikitable"&lt;br /&gt;
! Area !! Long (%) !! Short (%) !! Area !! Long (%) !! Short (%)&lt;br /&gt;
|-&lt;br /&gt;
| Bio and clinical NLP || 7 (57) || 28 (17) || Question Answering || 73 (36) || 41 (17)&lt;br /&gt;
|-&lt;br /&gt;
| Cognitive modeling || 24 (29) || 14 (14) || Resources and Evaluation || 33 (27) || 20 (20)&lt;br /&gt;
|-&lt;br /&gt;
| Dialog and Interactive systems || 64 (20) || 18 (27) || Semantics || 80 (13) || 42 (11)&lt;br /&gt;
|-&lt;br /&gt;
| Discourse and Pragmatics || 38 (21) || 11 (36) || Sentiment Analysis || 32 (28) || 40 (20)&lt;br /&gt;
|-&lt;br /&gt;
| Ethics, Bias and Fairness || 16 (25) || 12 (50) || Social Media || 44 (18) || 41 (36)&lt;br /&gt;
|-&lt;br /&gt;
| Generation || 46 (14) || 19 (23) || Speech || 19 (31) || 9 (33)&lt;br /&gt;
|-&lt;br /&gt;
| Information Extraction || 46 (28) || 16 (12) || Style || 24 (25) || 16 (25)&lt;br /&gt;
|-&lt;br /&gt;
| Information Retrieval || 22 (22) || 13 (30) || Summarization || 22 (27) || 28 (28)&lt;br /&gt;
|-&lt;br /&gt;
| Machine Learning for NLP || 100 (29) || 22 (22) || Syntax || 36 (52) || 54 (13)&lt;br /&gt;
|-&lt;br /&gt;
| Machine Translation || 49 (30) || 53 (18) || Text Mining || 101 (18) || 29 (24)&lt;br /&gt;
|-&lt;br /&gt;
| Multilingual NLP || 43 (25) || 28 (10) || Theory and Formalisms || 12 (58) || 12 (16)&lt;br /&gt;
|-&lt;br /&gt;
| NLP Applications || 60 (30) || 41 (17) || Vision &amp;amp; Robotics || 41 (12) || 22 (36)&lt;br /&gt;
|-&lt;br /&gt;
| Phonology || 24 (33) || 24 (25) || || ||&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%) and ran a lunchtime Careers in Industry panel that was very well attended. Panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault; the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submission options were offered; authors who opted for the non-archival version will not have a paper in the archive and are free to publish elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
= Review Process =&lt;br /&gt;
We issued a wide call for volunteers for Area Chairs (ACs) and reviewers; volunteers were screened by the PCs and assigned AC or reviewer roles.&lt;br /&gt;
The PCs created 25 specific areas plus one for “Mixed Topics” and assigned at least 2 ACs per area. After the abstract deadline we added more ACs to areas with larger than predicted submission volumes.&lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form, combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms, with a 6-point overall rating scale so there was no “easy out” mid-point, distinct sections for summary, strengths, and weaknesses to make it easy to scan and compare relevant sections, and the minimum-length feature of START enabled to elicit more consistently substantive content for the authors.&lt;br /&gt;
&lt;br /&gt;
Author identities were hidden from Area Chairs.&lt;br /&gt;
&lt;br /&gt;
Review assignment &lt;br /&gt;
* Criteria: Fairness, Expertise, Interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids&lt;br /&gt;
* Many reviewers did not have TPMS profiles&lt;br /&gt;
&lt;br /&gt;
The goal was no more than 5 papers per reviewer, though some reviewers agreed to handle more. &lt;br /&gt;
First-round accept/reject suggestions were made by area chairs. &lt;br /&gt;
Final decisions were made by the program chairs. &lt;br /&gt;
&lt;br /&gt;
No author response period: due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
Video poster highlights (instead of 1-minute madness): A/V failures made it hard to assess their effectiveness. &lt;br /&gt;
SRW papers integrated into sessions: positive feedback from participants, and a better experience for students&lt;br /&gt;
Did not repeat the Test of Time awards from 2018--should these happen every N years to allow for a sliding window? &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==	Recruiting area chairs (ACs) and reviewers ==&lt;br /&gt;
&lt;br /&gt;
{| class="wikitable"&lt;br /&gt;
! Response !! Area Chair (%) !! Reviewer (%)&lt;br /&gt;
|-&lt;br /&gt;
| Female || 24.4 || 25.2&lt;br /&gt;
|-&lt;br /&gt;
| Male || 73 || 71.7&lt;br /&gt;
|-&lt;br /&gt;
| Prefer not to answer || 2.6 || 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==	Assigning papers to areas and reviewers ==&lt;br /&gt;
Assignment to areas was based on keywords and manual inspection of the paper. Assignment of papers to reviewers followed a combination of TPMS, reviewer bidding, and manual tweaking. &lt;br /&gt;
&lt;br /&gt;
== 	Deciding on the reject-without-review papers ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs checked their batches of assigned papers and reported any issues to us. As reviewing began, reviewers also identified issues that had not been caught by the ACs, which they flagged to the ACs or directly to the PCs. We then reviewed each of these issues and made a final decision, to ensure that papers were handled consistently. This means each paper was checked for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects were:&lt;br /&gt;
* Violations of the dual submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines in either LaTeX or Word (thanks to Emily and Leon for introducing the concept).&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions, meaning a total of 79 papers, about 3% of submissions, were desk-rejected.&lt;br /&gt;
&lt;br /&gt;
==	A large pool of reviewers ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit the Area Chairs and Reviewers--we seeded the areas with volunteers who responded, and then Area Chairs filled out the remainder of their respective committees. Our goal was to ensure greater diversity by including in each area some participants who may not have been previously involved, and therefore would not have been invited if the committees were built from lists of previous reviewers.  390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
==	Structured review form ==&lt;br /&gt;
We used a hybrid reviewing form, combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms, with a 6-point overall rating scale so there was no “easy out” mid-point, distinct sections for summary, strengths, and weaknesses to make it easy to scan and compare relevant sections, and the minimum-length feature of START enabled to elicit more consistently substantive content for the authors. The form received excellent feedback from authors, but some reviewers complained about it and others outright circumvented the length requirement via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the overall largest number of submissions.&lt;br /&gt;
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that never became full papers in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant we needed 5 parallel tracks to fit the papers into 3 days--as the conference grows, decisions will have to be made about continuing to add more tracks, adding more days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
To be added: X reviews were received by the end of the review period, Y others within the next week; importance of double-blind reviewing&lt;br /&gt;
&lt;br /&gt;
= Best paper awards =&lt;br /&gt;
&lt;br /&gt;
* Best Thematic Paper &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
* Best Explainable NLP Paper &lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
* Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
= Presentations =&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel) plus 2 dedicated Industry Track sessions; duration: 15 minutes for the talk + 3 minutes for questions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= Timeline =&lt;br /&gt;
&lt;br /&gt;
= Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73112</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73112"/>
		<updated>2019-07-19T18:06:38Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: /* An overview of statistics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
= Area Chairs =&lt;br /&gt;
== FORMATTING TBD ==&lt;br /&gt;
Biomedical NLP &amp;amp; Clinical Text Processing&lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
Cognitive Modeling – Psycholinguistics&lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
Dialog and Interactive systems&lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&lt;br /&gt;
Zornitsa Kozareva, Google, USA&lt;br /&gt;
Sujith Ravi, Google, USA&lt;br /&gt;
Michael White, Ohio State University, USA&lt;br /&gt;
Discourse and Pragmatics&lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
Ethics, Bias and Fairness&lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
Generation&lt;br /&gt;
He He, Amazon Web Services, USA&lt;br /&gt;
Wei Xu, Ohio State University, USA&lt;br /&gt;
Yue Zhang, Westlake University, China&lt;br /&gt;
Information Extraction&lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
David McClosky, Google, USA&lt;br /&gt;
Gerard de Melo, Rutgers University, USA&lt;br /&gt;
Timothy Miller, Boston Children’s Hospital, USA&lt;br /&gt;
Mo Yu, IBM Research, USA&lt;br /&gt;
Information Retrieval&lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&lt;br /&gt;
Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
Machine Learning for NLP&lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&lt;br /&gt;
Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&lt;br /&gt;
Marie-Francine Moens, KU Leuven, Belgium&lt;br /&gt;
Vikram Ramanarayanan, Educational Testing Service, USA&lt;br /&gt;
Anna Rumshisky, University of Massachusetts Lowell, USA&lt;br /&gt;
Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
Machine Translation&lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&lt;br /&gt;
Daniel Cer, Google Research, USA&lt;br /&gt;
Haitao Mi, Ant Financial US, USA&lt;br /&gt;
Preslav Nakov, Qatar Computing Research Institute, Qatar&lt;br /&gt;
Zhaopeng Tu, Tencent, China&lt;br /&gt;
Mixed Topics&lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&lt;br /&gt;
Steven Bethard, University of Arizona, USA&lt;br /&gt;
Multilingualism, Cross lingual resources&lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&lt;br /&gt;
Ekaterina Shutova, University of Amsterdam, Netherlands&lt;br /&gt;
Yulia Tsvetkov, Carnegie Mellon University, USA&lt;br /&gt;
Ivan Vulic, Cambridge University, UK&lt;br /&gt;
NLP Applications&lt;br /&gt;
T. J. Hazen, Microsoft, USA&lt;br /&gt;
Alessandro Moschitti, Amazon, USA&lt;br /&gt;
Shimei Pan, University of Maryland Baltimore County, USA&lt;br /&gt;
Wenpeng Yin, University of Pennsylvania, USA&lt;br /&gt;
Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
Phonology, Morphology and Word Segmentation&lt;br /&gt;
Ramy Eskander, Columbia University, USA&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
Question Answering&lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&lt;br /&gt;
Yansong Feng, Peking University, China&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
Resources and Evaluation&lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
Semantics&lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&lt;br /&gt;
Samuel Bowman, New York University, USA&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
Sentiment Analysis&lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
Social Media&lt;br /&gt;
Dan Goldwasser, Purdue University, USA&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
Speech&lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
Style&lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
Summarization&lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&lt;br /&gt;
Fei Liu, University of Central Florida, USA&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
Tagging, Chunking, Syntax and Parsing&lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&lt;br /&gt;
Agata Savary, University of Tours, France&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
Text Mining&lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&lt;br /&gt;
Anna Feldman, Montclair State University, USA&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&lt;br /&gt;
Kevin Small, Amazon, USA&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
Theory and Formalisms&lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
Vision, Robotics and other grounding&lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= 	Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
Similar to what has been done at recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
This year included one-minute slides with pre-recorded audio showcasing the posters to be presented that day. The goal was to give posters more visibility. These were shown during the welcome reception, breakfast, and breaks.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* Diversity &amp;amp; Inclusion Organization&lt;br /&gt;
The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
&amp;lt;bunch of others, pull from their report&amp;gt;?&lt;br /&gt;
&lt;br /&gt;
* 	Submissions &lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
** Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
** Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or asked about changes to authorship, titles and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers available for bidding: reviewers loved it, authors did not&lt;br /&gt;
&lt;br /&gt;
== An overview of statistics ==&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected for anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some of them part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit the accepted papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding more days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
{| class="wikitable"&lt;br /&gt;
! !! Long !! Short !! Total !! TACL&lt;br /&gt;
|-&lt;br /&gt;
| Reviewed || 1067 || 666 || 1733 ||&lt;br /&gt;
|-&lt;br /&gt;
| Accepted as talk || 140 || 72 || 212 || 4&lt;br /&gt;
|-&lt;br /&gt;
| Accepted as poster || 141 || 70 || 211 || 5&lt;br /&gt;
|-&lt;br /&gt;
| Total Accepted || 281 (26.3%) || 142 (21.3%) || 423 (24.4%) || 9&lt;br /&gt;
|}&lt;br /&gt;
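As a quick sanity check, the acceptance percentages above can be recomputed from the raw counts. This is a minimal Python sketch using only the numbers reported in the break-down:&lt;br /&gt;

```python
# Sanity check: recompute the acceptance rates in the break-down above.
# All counts are taken directly from the table.
reviewed = {"long": 1067, "short": 666}
accepted_talk = {"long": 140, "short": 72}
accepted_poster = {"long": 141, "short": 70}

for fmt in ("long", "short"):
    total = accepted_talk[fmt] + accepted_poster[fmt]
    rate = 100 * total / reviewed[fmt]
    print(f"{fmt}: {total}/{reviewed[fmt]} accepted ({rate:.1f}%)")

overall = sum(accepted_talk.values()) + sum(accepted_poster.values())
overall_reviewed = sum(reviewed.values())
print(f"overall: {overall}/{overall_reviewed} "
      f"({100 * overall / overall_reviewed:.1f}%)")
```

Running this prints 26.3% (long), 21.3% (short), and 24.4% overall, matching the rates in the table.&lt;br /&gt;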
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
{| class="wikitable"&lt;br /&gt;
! Area !! Long (%) !! Short (%)&lt;br /&gt;
|-&lt;br /&gt;
| Bio and clinical NLP || 7 (57) || 28 (17)&lt;br /&gt;
|-&lt;br /&gt;
| Cognitive modeling || 24 (29) || 14 (14)&lt;br /&gt;
|-&lt;br /&gt;
| Dialog and Interactive systems || 64 (20) || 18 (27)&lt;br /&gt;
|-&lt;br /&gt;
| Discourse and Pragmatics || 38 (21) || 11 (36)&lt;br /&gt;
|-&lt;br /&gt;
| Ethics, Bias and Fairness || 16 (25) || 12 (50)&lt;br /&gt;
|-&lt;br /&gt;
| Generation || 46 (14) || 19 (23)&lt;br /&gt;
|-&lt;br /&gt;
| Information Extraction || 46 (28) || 16 (12)&lt;br /&gt;
|-&lt;br /&gt;
| Information Retrieval || 22 (22) || 13 (30)&lt;br /&gt;
|-&lt;br /&gt;
| Machine Learning for NLP || 100 (29) || 22 (22)&lt;br /&gt;
|-&lt;br /&gt;
| Machine Translation || 49 (30) || 53 (18)&lt;br /&gt;
|-&lt;br /&gt;
| Multilingual NLP || 43 (25) || 28 (10)&lt;br /&gt;
|-&lt;br /&gt;
| NLP Applications || 60 (30) || 41 (17)&lt;br /&gt;
|-&lt;br /&gt;
| Phonology || 24 (33) || 24 (25)&lt;br /&gt;
|-&lt;br /&gt;
| Question Answering || 73 (36) || 41 (17)&lt;br /&gt;
|-&lt;br /&gt;
| Resources and Evaluation || 33 (27) || 20 (20)&lt;br /&gt;
|-&lt;br /&gt;
| Semantics || 80 (13) || 42 (11)&lt;br /&gt;
|-&lt;br /&gt;
| Sentiment Analysis || 32 (28) || 40 (20)&lt;br /&gt;
|-&lt;br /&gt;
| Social Media || 44 (18) || 41 (36)&lt;br /&gt;
|-&lt;br /&gt;
| Speech || 19 (31) || 9 (33)&lt;br /&gt;
|-&lt;br /&gt;
| Style || 24 (25) || 16 (25)&lt;br /&gt;
|-&lt;br /&gt;
| Summarization || 22 (27) || 28 (28)&lt;br /&gt;
|-&lt;br /&gt;
| Syntax || 36 (52) || 54 (13)&lt;br /&gt;
|-&lt;br /&gt;
| Text Mining || 101 (18) || 29 (24)&lt;br /&gt;
|-&lt;br /&gt;
| Theory and Formalisms || 12 (58) || 12 (16)&lt;br /&gt;
|-&lt;br /&gt;
| Vision &amp;amp; Robotics || 41 (12) || 22 (36)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%), and ran a lunchtime Careers in Industry panel that was very well attended. Panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault; the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered, meaning that authors who opted for the non-archival version will not have a paper available in the archive and are free to publish elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
=	Review Process =&lt;br /&gt;
We issued a wide call for volunteers for Area Chairs (ACs) and reviewers. Volunteers were screened by the PCs and assigned AC or reviewer roles.&lt;br /&gt;
The PCs created 25 specific areas plus one for “Mixed Topics” and assigned at least 2 ACs per area. After the abstract deadline, we added more ACs to areas with larger-than-predicted submission counts. &lt;br /&gt;
&lt;br /&gt;
We used a hybrid review form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale so there was no “easy out” mid-point; distinct summary, strengths, and weaknesses sections to make reviews easy to scan and compare; and START’s minimum-length feature enabled to elicit more consistently substantive content for the authors.&lt;br /&gt;
&lt;br /&gt;
Authors were blind to Area Chairs.&lt;br /&gt;
* Review assignment&lt;br /&gt;
** Criteria: fairness, expertise, interest&lt;br /&gt;
** Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids&lt;br /&gt;
** Many reviewers did not have TPMS profiles&lt;br /&gt;
** Goal was no more than 5 papers per reviewer; some reviewers agreed to handle more&lt;br /&gt;
* First-round accept/reject suggestions were made by area chairs&lt;br /&gt;
* Final decisions were made by the program chairs&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* No author response: due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
* Video Poster Highlights, shown instead of one-minute madness: A/V failures have made it hard to assess effectiveness. &lt;br /&gt;
* SRW papers integrated into sessions: positive feedback from participants, better experience for students&lt;br /&gt;
* Did not repeat the Test of Time awards from 2018--should these happen every N years to allow for a sliding window? &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==	Recruiting area chairs (ACs) and reviewers ==&lt;br /&gt;
&lt;br /&gt;
{| class="wikitable"&lt;br /&gt;
! Response !! Area Chair (%) !! Reviewer (%)&lt;br /&gt;
|-&lt;br /&gt;
| Female || 24.4 || 25.2&lt;br /&gt;
|-&lt;br /&gt;
| Male || 73 || 71.7&lt;br /&gt;
|-&lt;br /&gt;
| Prefer not to answer || 2.6 || 3.1&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==	Assigning papers to areas and reviewers ==&lt;br /&gt;
Assignment to areas was based on keywords and manual inspection of the paper. Assignment of papers to reviewers followed a combination of TPMS, reviewer bidding, and manual tweaking. &lt;br /&gt;
&lt;br /&gt;
== 	Deciding on the reject-without-review papers ==&lt;br /&gt;
Our process for identifying desk rejects has been very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As the reviewing begins, reviewers may also identify issues that were not caught by ACs, which they flag up to ACs or directly to PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects were:&lt;br /&gt;
* violations of the dual submission policy specified in the call for papers&lt;br /&gt;
* violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “format cheating”: submissions not following the clearly stated format and style guidelines in either LaTeX or Word (thanks to Emily and Leon for introducing the concept)&lt;br /&gt;
As of February 7th, out of 2378 submissions there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions--a total of 79 papers, or about 3% of submissions, desk-rejected.&lt;br /&gt;
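The desk-reject arithmetic above can be checked the same way; a minimal Python sketch with the counts taken from this paragraph:&lt;br /&gt;

```python
# Desk-reject tally as of February 7 (counts from the paragraph above).
format_issues, anonymity, dual_submission = 44, 24, 11
submissions = 2378

total = format_issues + anonymity + dual_submission
share = 100 * total / submissions
print(f"{total} desk rejects out of {submissions} submissions ({share:.1f}%)")
```

This yields 79 desk rejects, about 3.3% of the 2378 submissions.&lt;br /&gt;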
&lt;br /&gt;
==	A large pool of reviewers ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit the Area Chairs and Reviewers--we seeded the areas with volunteers who responded, and then Area Chairs filled out the remainder of their respective committees. Our goal was to ensure greater diversity by including in each area some participants who may not have been previously involved, and therefore would not have been invited if the committees were built from lists of previous reviewers.  390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
==	Structured review form ==&lt;br /&gt;
We used a hybrid review form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale so there was no “easy out” mid-point; distinct summary, strengths, and weaknesses sections to make reviews easy to scan and compare; and START’s minimum-length feature enabled to elicit more consistently substantive content for the authors. This received excellent feedback from authors, but some reviewers complained about it, and others outright circumvented it via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the overall largest number of submissions. &lt;br /&gt;
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected for anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some of them part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant we needed 5 parallel tracks to fit the accepted papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding more days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked SRW or TACL accordingly.&lt;br /&gt;
&lt;br /&gt;
To be added: X reviews were received by the end of the review period, Y others within the next week; importance of double-blind reviewing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
=	Presentations=&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), duration: 15 minutes for talk + 3 minutes for questions + 2 dedicated Industry Track sessions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= 	Timeline =&lt;br /&gt;
&lt;br /&gt;
=	Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73105</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73105"/>
		<updated>2019-07-19T15:50:49Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
= Area Chairs =&lt;br /&gt;
== FORMATTING TBD ==&lt;br /&gt;
Biomedical NLP &amp;amp; Clinical Text Processing&lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
Cognitive Modeling – Psycholinguistics&lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
Dialog and Interactive systems&lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&lt;br /&gt;
Zornitsa Kozareva, Google, USA&lt;br /&gt;
Sujith Ravi, Google, USA&lt;br /&gt;
Michael White, Ohio State University, USA&lt;br /&gt;
Discourse and Pragmatics&lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
Ethics, Bias and Fairness&lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
Generation&lt;br /&gt;
He He, Amazon Web Services, USA&lt;br /&gt;
Wei Xu, Ohio State University, USA&lt;br /&gt;
Yue Zhang, Westlake University, China&lt;br /&gt;
Information Extraction&lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
David McClosky, Google, USA&lt;br /&gt;
Gerard de Melo, Rutgers University, USA&lt;br /&gt;
Timothy Miller, Boston Children’s Hospital, USA&lt;br /&gt;
Mo Yu, IBM Research, USA&lt;br /&gt;
Information Retrieval&lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&lt;br /&gt;
Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
Machine Learning for NLP&lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&lt;br /&gt;
Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&lt;br /&gt;
Marie-Francine Moens, KU Leuven, Belgium&lt;br /&gt;
Vikram Ramanarayanan, Educational Testing Service, USA&lt;br /&gt;
Anna Rumshisky, University of Massachusetts Lowell, USA&lt;br /&gt;
Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
Machine Translation&lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&lt;br /&gt;
Daniel Cer, Google Research, USA&lt;br /&gt;
Haitao Mi, Ant Financial US, USA&lt;br /&gt;
Preslav Nakov, Qatar Computing Research Institute, Qatar&lt;br /&gt;
Zhaopeng Tu, Tencent, China&lt;br /&gt;
Mixed Topics&lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&lt;br /&gt;
Steven Bethard, University of Arizona, USA&lt;br /&gt;
Multilingualism, Cross lingual resources&lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&lt;br /&gt;
Ekaterina Shutova, University of Amsterdam, Netherlands&lt;br /&gt;
Yulia Tsvetkov, Carnegie Mellon University, USA&lt;br /&gt;
Ivan Vulic, Cambridge University, UK&lt;br /&gt;
NLP Applications&lt;br /&gt;
T. J. Hazen, Microsoft, USA&lt;br /&gt;
Alessandro Moschitti, Amazon, USA&lt;br /&gt;
Shimei Pan, University of Maryland Baltimore County, USA&lt;br /&gt;
Wenpeng Yin, University of Pennsylvania, USA&lt;br /&gt;
Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
Phonology, Morphology and Word Segmentation&lt;br /&gt;
Ramy Eskander, Columbia University, USA&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
Question Answering&lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&lt;br /&gt;
Yansong Feng, Peking University, China&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
Resources and Evaluation&lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
Semantics&lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&lt;br /&gt;
Samuel Bowman, New York University, USA&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
Sentiment Analysis&lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
Social Media&lt;br /&gt;
Dan Goldwasser, Purdue University, USA&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
Speech&lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
Style&lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
Summarization&lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&lt;br /&gt;
Fei Liu, University of Central Florida, USA&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
Tagging, Chunking, Syntax and Parsing&lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&lt;br /&gt;
Agata Savary, University of Tours, France&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
Text Mining&lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&lt;br /&gt;
Anna Feldman, Montclair State University, USA&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&lt;br /&gt;
Kevin Small, Amazon, USA&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
Theory and Formalisms&lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
Vision, Robotics and other grounding&lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= 	Main Innovations =&lt;br /&gt;
*  Conference theme&lt;br /&gt;
The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
*  Land Acknowledgement&lt;br /&gt;
Similar to what has been done at recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
* Video Poster Highlights&lt;br /&gt;
This year included one-minute slides with pre-recorded audio showcasing the posters to be presented that day. The goal was to give posters more visibility. These were shown during the welcome reception, breakfast, and breaks.&lt;br /&gt;
&lt;br /&gt;
* Remote Presentations&lt;br /&gt;
Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
* Diversity &amp;amp; Inclusion Organization&lt;br /&gt;
The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
** additional questions on the registration form to identify any accommodations&lt;br /&gt;
** preferred pronouns (optionally) added to badges&lt;br /&gt;
** I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
&amp;lt;bunch of others, pull from their report&amp;gt;?&lt;br /&gt;
&lt;br /&gt;
* 	Submissions &lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
** Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
** Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or to ask about changes to authorship, titles, and abstracts.&lt;br /&gt;
&lt;br /&gt;
* Full papers were available for bidding: reviewers loved it; authors did not.&lt;br /&gt;
&lt;br /&gt;
== An overview of statistics ==&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted their full papers, so the table below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn prior to acceptance decisions being sent, although some were withdrawn partway through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit more papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked TACL or SRW accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance breakdown:&lt;br /&gt;
\begin{table}[h]&lt;br /&gt;
\centering&lt;br /&gt;
\begin{tabular}{|l|l|l|l|l|}&lt;br /&gt;
\hline&lt;br /&gt;
 &amp;amp;\textbf{Long}&amp;amp; \textbf{Short} &amp;amp;\textbf{Total} &amp;amp; \textbf{TACL}\\ \hline&lt;br /&gt;
Reviewed &amp;amp; 1067 &amp;amp; 666 &amp;amp; 1733 &amp;amp; \\&lt;br /&gt;
Accepted as talk &amp;amp; 140  &amp;amp; 72  &amp;amp;  212 &amp;amp; 4\\&lt;br /&gt;
Accepted as poster &amp;amp;  141 &amp;amp; 70  &amp;amp;  211 &amp;amp; 5\\&lt;br /&gt;
Total Accepted &amp;amp; 281 (26.3\%)  &amp;amp; 142 (21.3\%) &amp;amp; 423 (24.4\%) &amp;amp; 9\\&lt;br /&gt;
\hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\end{table}&lt;br /&gt;
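The percentages in the table are simple ratios of accepted to reviewed papers per format; a minimal sanity-check sketch, using only the counts from the table above:

```python
# Acceptance counts from the table above (talks + posters per format).
reviewed = {"long": 1067, "short": 666}
accepted = {"long": 140 + 141, "short": 72 + 70}

for fmt in ("long", "short"):
    rate = 100 * accepted[fmt] / reviewed[fmt]
    print(f"{fmt}: {accepted[fmt]} accepted ({rate:.1f}%)")
# long: 281 accepted (26.3%)
# short: 142 accepted (21.3%)

total = sum(accepted.values())
print(f"total: {total} ({100 * total / sum(reviewed.values()):.1f}%)")
# total: 423 (24.4%)
```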
&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
\begin{table}[h]&lt;br /&gt;
\centering&lt;br /&gt;
\begin{tabular}{|l|l|l|l|l|l|}&lt;br /&gt;
\hline&lt;br /&gt;
\textbf{Area} &amp;amp; \textbf{Long (\%)} &amp;amp; \textbf{Short (\%)} &amp;amp; \textbf{Area} &amp;amp; \textbf{Long (\%)} &amp;amp; \textbf{Short (\%)}\\ \hline&lt;br /&gt;
Bio and clinical NLP &amp;amp; 7 (57) &amp;amp; 28 (17) &amp;amp; Question Answering &amp;amp; 73 (36) &amp;amp; 41 (17)\\&lt;br /&gt;
Cognitive modeling &amp;amp; 24 (29) &amp;amp; 14 (14) &amp;amp; Resources and Evaluation &amp;amp; 33 (27) &amp;amp; 20 (20)\\&lt;br /&gt;
Dialog and Interactive systems &amp;amp; 64 (20) &amp;amp; 18 (27) &amp;amp; Semantics &amp;amp; 80 (13) &amp;amp; 42 (11)\\&lt;br /&gt;
Discourse and Pragmatics &amp;amp; 38 (21) &amp;amp; 11 (36) &amp;amp; Sentiment Analysis &amp;amp; 32 (28) &amp;amp; 40 (20)\\&lt;br /&gt;
Ethics, Bias and Fairness &amp;amp; 16 (25) &amp;amp; 12 (50) &amp;amp; Social Media &amp;amp; 44 (18) &amp;amp; 41 (36)\\&lt;br /&gt;
Generation &amp;amp; 46 (14) &amp;amp; 19 (23) &amp;amp; Speech &amp;amp; 19 (31) &amp;amp; 9 (33)\\&lt;br /&gt;
Information Extraction &amp;amp; 46 (28) &amp;amp; 16 (12) &amp;amp; Style &amp;amp; 24 (25) &amp;amp; 16 (25)\\&lt;br /&gt;
Information Retrieval &amp;amp; 22 (22) &amp;amp; 13 (30) &amp;amp; Summarization &amp;amp; 22 (27) &amp;amp; 28 (28)\\&lt;br /&gt;
Machine Learning for NLP &amp;amp; 100 (29) &amp;amp; 22 (22) &amp;amp; Syntax &amp;amp; 36 (52) &amp;amp; 54 (13)\\&lt;br /&gt;
Machine Translation &amp;amp; 49 (30) &amp;amp; 53 (18) &amp;amp; Text Mining &amp;amp; 101 (18) &amp;amp; 29 (24)\\&lt;br /&gt;
Multilingual NLP &amp;amp; 43 (25) &amp;amp; 28 (10) &amp;amp; Theory and Formalisms &amp;amp; 12 (58) &amp;amp; 12 (16)\\&lt;br /&gt;
NLP Applications &amp;amp; 60 (30) &amp;amp; 41 (17) &amp;amp; Vision \&amp;amp; Robotics &amp;amp; 41 (12) &amp;amp; 22 (36)\\&lt;br /&gt;
Phonology &amp;amp; 24 (33) &amp;amp; 24 (25) &amp;amp; &amp;amp; &amp;amp; \\&lt;br /&gt;
\hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%) and ran a very well attended lunchtime Careers in Industry panel. Panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault, and the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered: authors who opted for the non-archival option do not have a paper in the archive and are free to publish elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
=	Review Process =&lt;br /&gt;
We issued a wide call for volunteers for Area Chairs (ACs) and reviewers. Volunteers were screened by the PCs and assigned AC or reviewer roles.&lt;br /&gt;
The PCs created 25 specific areas plus one for “Mixed Topics” and assigned at least 2 ACs per area. After the abstract deadline, we added more ACs to areas with larger-than-predicted submission volumes.&lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms, with a 6-point overall rating scale so there was no “easy out” mid-point; distinct summary, strengths, and weaknesses sections to make reviews easy to scan and compare; and the minimum-length feature of START enabled to elicit more consistently substantive content for the authors.&lt;br /&gt;
&lt;br /&gt;
Authors were blind to the identities of Area Chairs.&lt;br /&gt;
Review assignment:&lt;br /&gt;
* Criteria: fairness, expertise, interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids (many reviewers did not have TPMS profiles)&lt;br /&gt;
* Goal was no more than 5 papers per reviewer; some reviewers agreed to handle more&lt;br /&gt;
* First-round accept/reject suggestions were made by area chairs&lt;br /&gt;
* Final decisions were made by the program chairs&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
No author response: due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
Video Poster Highlights (instead of 1-minute madness): A/V failures made it hard to assess effectiveness. &lt;br /&gt;
SRW papers integrated into sessions: positive feedback from participants, and a better experience for students.&lt;br /&gt;
Did not repeat the Test of Time awards from 2018--should these happen every N years to allow for a sliding window? &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==	Recruiting area chairs (ACs) and reviewers ==&lt;br /&gt;
&lt;br /&gt;
\begin{table}[h]&lt;br /&gt;
\centering&lt;br /&gt;
\begin{tabular}{|l|l|l|}&lt;br /&gt;
\hline&lt;br /&gt;
\textbf{Response} &amp;amp; \textbf{Area Chair (\%)} &amp;amp; \textbf{Reviewer (\%)}\\ \hline&lt;br /&gt;
Female &amp;amp; 24.4 &amp;amp; 25.2\\&lt;br /&gt;
Male &amp;amp; 73 &amp;amp; 71.7\\&lt;br /&gt;
Prefer not to answer &amp;amp; 2.6 &amp;amp; 3.1\\&lt;br /&gt;
\hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==	Assigning papers to areas and reviewers ==&lt;br /&gt;
Assignment to areas was based on keywords and manual inspection of the paper. Assignment of papers to reviewers followed a combination of TPMS, reviewer bidding, and manual tweaking. &lt;br /&gt;
&lt;br /&gt;
== 	Deciding on the reject-without-review papers ==&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs checked their batches of assigned papers and reported any issues to us. As reviewing began, reviewers could also identify issues that were not caught by ACs, which they flagged to ACs or directly to the PCs. We then reviewed each of these issues and made a final decision, to ensure that papers were handled consistently. This means each paper was reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects were:&lt;br /&gt;
* Violations of the dual-submission policy specified in the call for papers&lt;br /&gt;
* Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
* “Format cheating”: submissions not following the clearly stated format and style guidelines in either LaTeX or Word (thanks to Emily and Leon for introducing the concept)&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions, meaning that approximately 3% of submissions were desk-rejected.&lt;br /&gt;
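These category counts are consistent with the 79 desk rejects reported in the overview; a quick check, using only the figures from this section:

```python
# Desk-reject counts as of February 7th, by category (from the report).
rejections = {"format": 44, "anonymity": 24, "dual submission": 11}
submissions = 2378

total = sum(rejections.values())
print(total)                                # 79
print(f"{100 * total / submissions:.1f}%")  # 3.3%
```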
&lt;br /&gt;
==	A large pool of reviewers ==&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and reviewers--we seeded the areas with volunteers who responded, and Area Chairs then filled out the remainder of their respective committees. Our goal was to increase diversity by including in each area some participants who had not previously been involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
==	Structured review form ==&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms, with a 6-point overall rating scale so there was no “easy out” mid-point; distinct summary, strengths, and weaknesses sections to make reviews easy to scan and compare; and the minimum-length feature of START enabled to elicit more consistently substantive content for the authors. The form received excellent feedback from authors, but some reviewers complained about it and others outright circumvented the length requirement via HTML tags or repeated filler content. &lt;br /&gt;
&lt;br /&gt;
==	Abstract Submissions ==&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the largest overall number of submissions. &lt;br /&gt;
&lt;br /&gt;
== Review process ==&lt;br /&gt;
&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted their full papers, so the table above uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn prior to acceptance decisions being sent, although some were withdrawn partway through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant we needed 5 parallel tracks to fit more papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked TACL or SRW accordingly.&lt;br /&gt;
&lt;br /&gt;
To be added: X reviews were received by the end of the review period, Y others within the following week; importance of double-blind reviewing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
=	Presentations=&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel) plus 2 dedicated Industry Track sessions; duration: 15 minutes for the talk + 3 minutes for questions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= 	Timeline =&lt;br /&gt;
&lt;br /&gt;
=	Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73104</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73104"/>
		<updated>2019-07-19T15:43:03Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
= Area Chairs =&lt;br /&gt;
== FORMATTING TBD ==&lt;br /&gt;
Biomedical NLP &amp;amp; Clinical Text Processing&lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
Cognitive Modeling – Psycholinguistics&lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
Dialog and Interactive systems&lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&lt;br /&gt;
Zornitsa Kozareva, Google, USA&lt;br /&gt;
Sujith Ravi, Google, USA&lt;br /&gt;
Michael White, Ohio State University, USA&lt;br /&gt;
Discourse and Pragmatics&lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
Ethics, Bias and Fairness&lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
Generation&lt;br /&gt;
He He, Amazon Web Services, USA&lt;br /&gt;
	Wei Xu, Ohio State University, USA&lt;br /&gt;
	Yue Zhang, Westlake University, China&lt;br /&gt;
Information Extraction&lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
	David McClosky, Google, USA&lt;br /&gt;
	Gerard de Melo, Rutgers University, USA&lt;br /&gt;
	Timothy Miller, Boston Children’s Hospital, USA&lt;br /&gt;
	Mo Yu, IBM Research, USA&lt;br /&gt;
Information Retrieval&lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&lt;br /&gt;
	Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
Machine Learning for NLP&lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&lt;br /&gt;
	Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&lt;br /&gt;
	Marie-Francine Moens, KU Leuven, Belgium&lt;br /&gt;
	Vikram Ramanarayanan, Educational Testing Service, USA&lt;br /&gt;
	Anna Rumshisky, University of Massachusetts Lowell, USA&lt;br /&gt;
	Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
Machine Translation&lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&lt;br /&gt;
	Daniel Cer, Google Research, USA&lt;br /&gt;
	Haitao Mi, Ant Financial US, USA&lt;br /&gt;
	Preslav Nakov, Qatar Computing Research Institute, Qatar&lt;br /&gt;
	Zhaopeng Tu, Tencent, China&lt;br /&gt;
Mixed Topics&lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&lt;br /&gt;
	Steven Bethard, University of Arizona, USA&lt;br /&gt;
Multilingualism, Cross lingual resources&lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&lt;br /&gt;
	Ekaterina Shutova, University of Amsterdam, Netherlands&lt;br /&gt;
	Yulia Tsvetkov, Carnegie Mellon University, USA&lt;br /&gt;
	Ivan Vulic, Cambridge University, UK&lt;br /&gt;
NLP Applications&lt;br /&gt;
T. J. Hazen, Microsoft, USA&lt;br /&gt;
	Alessandro Moschitti, Amazon, USA&lt;br /&gt;
	Shimei Pan, University of Maryland Baltimore County, USA&lt;br /&gt;
	Wenpeng Yin, University of Pennsylvania, USA&lt;br /&gt;
	Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
Phonology, Morphology and Word Segmentation&lt;br /&gt;
Ramy Eskander, Columbia University, USA&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
Question Answering&lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&lt;br /&gt;
Yansong Feng, Peking University, China&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
Resources and Evaluation&lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
Semantics&lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&lt;br /&gt;
Samuel Bowman, New York University, USA&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
Sentiment Analysis&lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
Social Media&lt;br /&gt;
Dan Goldwasser, Purdue University, USA&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
Speech&lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
Style&lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
Summarization&lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&lt;br /&gt;
Fei Liu, University of Central Florida, USA&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
Tagging, Chunking, Syntax and Parsing&lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&lt;br /&gt;
Agata Savary, University of Tours, France&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
Text Mining&lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&lt;br /&gt;
Anna Feldman, Montclair State University, USA&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&lt;br /&gt;
Kevin Small, Amazon, USA&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
Theory and Formalisms&lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
Vision, Robotics and other grounding&lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= 	Main Innovations =&lt;br /&gt;
Conference theme&lt;br /&gt;
The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper was selected.&lt;br /&gt;
&lt;br /&gt;
Land Acknowledgement&lt;br /&gt;
Similar to what has been done at recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
Video Poster Highlights&lt;br /&gt;
This year included one-minute slides with pre-recorded audio that showcased the posters to be presented that day. The goal was to provide more visibility for posters. These were shown during the welcome reception, breakfast, and breaks.&lt;br /&gt;
&lt;br /&gt;
Remote Presentations&lt;br /&gt;
Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
Diversity &amp;amp; Inclusion Organization&lt;br /&gt;
The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
additional questions on the registration form to identify any accommodations&lt;br /&gt;
preferred pronouns (optionally) added to badges&lt;br /&gt;
I’m hiring/I’m looking for a job/I’m new badge stickers&lt;br /&gt;
&amp;lt;bunch of others, pull from their report&amp;gt;?&lt;br /&gt;
&lt;br /&gt;
=	Submissions =&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
Pro: early response to areas with larger than predicted number of papers&lt;br /&gt;
Con: too much overhead for PCs, as authors repeatedly contacted chairs to request that papers be moved between long and short, or to ask about changes to authorship, titles, and abstracts.&lt;br /&gt;
&lt;br /&gt;
Full papers were available for bidding: reviewers loved it; authors did not.&lt;br /&gt;
&lt;br /&gt;
== An overview of statistics ==&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted their full papers, so the table below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn prior to acceptance decisions being sent, although some were withdrawn partway through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant 5 parallel tracks were needed to fit more papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked TACL or SRW accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance breakdown:&lt;br /&gt;
\begin{table}[h]&lt;br /&gt;
\centering&lt;br /&gt;
\begin{tabular}{|l|l|l|l|l|}&lt;br /&gt;
\hline&lt;br /&gt;
 &amp;amp;\textbf{Long}&amp;amp; \textbf{Short} &amp;amp;\textbf{Total} &amp;amp; \textbf{TACL}\\ \hline&lt;br /&gt;
Reviewed &amp;amp; 1067 &amp;amp; 666 &amp;amp; 1733 &amp;amp; \\&lt;br /&gt;
Accepted as talk &amp;amp; 140  &amp;amp; 72  &amp;amp;  212 &amp;amp; 4\\&lt;br /&gt;
Accepted as poster &amp;amp;  141 &amp;amp; 70  &amp;amp;  211 &amp;amp; 5\\&lt;br /&gt;
Total Accepted &amp;amp; 281 (26.3\%)  &amp;amp; 142 (21.3\%) &amp;amp; 423 (24.4\%) &amp;amp; 9\\&lt;br /&gt;
\hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\end{table}&lt;br /&gt;
== Detailed statistics by area ==&lt;br /&gt;
&lt;br /&gt;
\begin{table}[h]&lt;br /&gt;
\centering&lt;br /&gt;
\begin{tabular}{|l|l|l|l|l|l|}&lt;br /&gt;
\hline&lt;br /&gt;
\textbf{Area} &amp;amp; \textbf{Long (\%)} &amp;amp; \textbf{Short (\%)} &amp;amp; \textbf{Area} &amp;amp; \textbf{Long (\%)} &amp;amp; \textbf{Short (\%)}\\ \hline&lt;br /&gt;
Bio and clinical NLP &amp;amp; 7 (57) &amp;amp; 28 (17) &amp;amp; Question Answering &amp;amp; 73 (36) &amp;amp; 41 (17)\\&lt;br /&gt;
Cognitive modeling &amp;amp; 24 (29) &amp;amp; 14 (14) &amp;amp; Resources and Evaluation &amp;amp; 33 (27) &amp;amp; 20 (20)\\&lt;br /&gt;
Dialog and Interactive systems &amp;amp; 64 (20) &amp;amp; 18 (27) &amp;amp; Semantics &amp;amp; 80 (13) &amp;amp; 42 (11)\\&lt;br /&gt;
Discourse and Pragmatics &amp;amp; 38 (21) &amp;amp; 11 (36) &amp;amp; Sentiment Analysis &amp;amp; 32 (28) &amp;amp; 40 (20)\\&lt;br /&gt;
Ethics, Bias and Fairness &amp;amp; 16 (25) &amp;amp; 12 (50) &amp;amp; Social Media &amp;amp; 44 (18) &amp;amp; 41 (36)\\&lt;br /&gt;
Generation &amp;amp; 46 (14) &amp;amp; 19 (23) &amp;amp; Speech &amp;amp; 19 (31) &amp;amp; 9 (33)\\&lt;br /&gt;
Information Extraction &amp;amp; 46 (28) &amp;amp; 16 (12) &amp;amp; Style &amp;amp; 24 (25) &amp;amp; 16 (25)\\&lt;br /&gt;
Information Retrieval &amp;amp; 22 (22) &amp;amp; 13 (30) &amp;amp; Summarization &amp;amp; 22 (27) &amp;amp; 28 (28)\\&lt;br /&gt;
Machine Learning for NLP &amp;amp; 100 (29) &amp;amp; 22 (22) &amp;amp; Syntax &amp;amp; 36 (52) &amp;amp; 54 (13)\\&lt;br /&gt;
Machine Translation &amp;amp; 49 (30) &amp;amp; 53 (18) &amp;amp; Text Mining &amp;amp; 101 (18) &amp;amp; 29 (24)\\&lt;br /&gt;
Multilingual NLP &amp;amp; 43 (25) &amp;amp; 28 (10) &amp;amp; Theory and Formalisms &amp;amp; 12 (58) &amp;amp; 12 (16)\\&lt;br /&gt;
NLP Applications &amp;amp; 60 (30) &amp;amp; 41 (17) &amp;amp; Vision \&amp;amp; Robotics &amp;amp; 41 (12) &amp;amp; 22 (36)\\&lt;br /&gt;
Phonology &amp;amp; 24 (33) &amp;amp; 24 (25) &amp;amp; &amp;amp; &amp;amp; \\&lt;br /&gt;
\hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\end{table}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conference tracks ==&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters; acceptance rate ~28%) and ran a very well attended lunchtime Careers in Industry panel. Panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault, and the moderator was Phil Resnik. &lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered: authors who opted for the non-archival option do not have a paper in the archive and are free to publish elsewhere. &lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
=	Review Process =&lt;br /&gt;
We issued a wide call for volunteers for Area Chairs (ACs) and reviewers; the PCs screened the volunteers and assigned them to AC or reviewer roles.&lt;br /&gt;
The PCs created 25 specific areas plus one for “Mixed Topics” and assigned at least 2 ACs per area. After the abstract deadline, we added more ACs to areas with larger-than-predicted submission volumes.&lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale (so there was no “easy out” mid-point), distinct sections for summary, strengths, and weaknesses to make relevant sections easy to scan and compare, and START’s minimum-length feature enabled to elicit more consistently substantive content for the authors.&lt;br /&gt;
&lt;br /&gt;
Authors were blind to Area Chairs&lt;br /&gt;
Review assignment:&lt;br /&gt;
* Criteria: fairness, expertise, and interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids, although many reviewers did not have TPMS profiles&lt;br /&gt;
* Load: the goal was no more than 5 papers per reviewer; some reviewers agreed to handle more&lt;br /&gt;
* Process: first-round accept/reject suggestions were made by area chairs, and final decisions were made by the program chairs&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
No author response: skipped due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
Video poster highlights: shown instead of one-minute madness; A/V failures made it hard to assess their effectiveness.&lt;br /&gt;
SRW papers integrated into sessions: positive feedback from participants, and a better experience for students&lt;br /&gt;
Did not repeat the Test of Time awards from 2018--should these happen every N years, to allow for a sliding window?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4.1	Recruiting area chairs (ACs) and reviewers:&lt;br /&gt;
&lt;br /&gt;
Response: Area Chair (%), Reviewer (%)&lt;br /&gt;
Female: 24.4, 25.2&lt;br /&gt;
Male: 73, 71.7&lt;br /&gt;
Prefer not to answer: 2.6, 3.1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4.2	Assigning papers to areas and reviewers:&lt;br /&gt;
Assignment to areas was based on keywords and manual inspection of the paper. Assignment of papers to reviewers followed a combination of TPMS, reviewer bidding, and manual tweaking. &lt;br /&gt;
&lt;br /&gt;
4.3	Deciding on the reject-without-review papers:&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As reviewing begins, reviewers may also identify issues that were not caught by ACs, which they flag to ACs or directly to PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
Violations of the dual submission policy specified in the call for papers&lt;br /&gt;
Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
“Format cheating”: submissions not following the clearly stated format and style guidelines in either LaTeX or Word (thanks to Emily and Leon for introducing the concept).&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions. This means that a total of 79 submissions, roughly 3.3%, were desk-rejected.&lt;br /&gt;
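As a quick check on the arithmetic, the desk-reject total and percentage can be recomputed from the per-category counts reported above (a minimal sketch; the counts are copied from the sentence above):

```python
# Desk rejects by category, as reported as of February 7th.
desk_rejects = {"format": 44, "anonymity": 24, "dual submission": 11}
submissions = 2378

total = sum(desk_rejects.values())   # 44 + 24 + 11 = 79
pct = 100 * total / submissions      # fraction of all submissions
print(f"{total} desk rejects out of {submissions} = {pct:.1f}%")
# 79 desk rejects out of 2378 = 3.3%
```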
&lt;br /&gt;
4.4	A large pool of reviewers&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and Reviewers--we seeded the areas with the volunteers who responded, and the Area Chairs then filled out the remainder of their respective committees. Our goal was to increase diversity by including in each area some participants who had not previously been involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
4.5	Structured review form&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale (so there was no “easy out” mid-point), distinct sections for summary, strengths, and weaknesses to make relevant sections easy to scan and compare, and START’s minimum-length feature enabled to elicit more consistently substantive content for the authors. The form received excellent feedback from authors, but some reviewers complained about it, and others outright circumvented the minimum length via HTML tags or repeated filler content.&lt;br /&gt;
&lt;br /&gt;
4.6	Abstract Submissions&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the largest overall number of submissions.&lt;br /&gt;
&lt;br /&gt;
4.7 Review process&lt;br /&gt;
&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn prior to acceptance decisions being sent, some of them part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant we needed 5 parallel tracks to fit the papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked accordingly.&lt;br /&gt;
&lt;br /&gt;
X reviews were received by the end of the review period, Y others within the next week.&lt;br /&gt;
&lt;br /&gt;
Importance of double-blind reviewing&lt;br /&gt;
&lt;br /&gt;
4.9	Statistics&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
*  Best Thematic Paper: &lt;br /&gt;
: &#039;&#039;What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
*  Best Explainable NLP Paper:&lt;br /&gt;
: &#039;&#039;CNM: An Interpretable Complex-valued Network for Matching&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
* Best Long Paper: &lt;br /&gt;
: &#039;&#039;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
*  Best Short Paper: &lt;br /&gt;
: &#039;&#039;Probing the Need for Visual Context in Multimodal Machine Translation&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
* Best Resource Paper: &lt;br /&gt;
: &#039;&#039;CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&#039;&#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
: Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
=	Presentations=&lt;br /&gt;
* Long-paper presentations: 22 sessions in total (4 sessions in parallel), duration: 15 minutes for talk + 3 minutes for questions + 2 dedicated Industry Track sessions&lt;br /&gt;
* Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
* Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
* Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= 	Timeline =&lt;br /&gt;
&lt;br /&gt;
=	Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73103</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73103"/>
		<updated>2019-07-19T15:35:57Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Roskovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;br /&gt;
&lt;br /&gt;
= Area Chairs =&lt;br /&gt;
== FORMATTING TBD ==&lt;br /&gt;
Biomedical NLP &amp;amp; Clinical Text Processing&lt;br /&gt;
Bridget McInnes, Virginia Commonwealth University, USA&lt;br /&gt;
Byron C. Wallace, Northeastern University, USA&lt;br /&gt;
Cognitive Modeling – Psycholinguistics&lt;br /&gt;
Serguei Pakhomov, University of Minnesota, USA&lt;br /&gt;
Emily Prud’hommeaux, Boston College, USA&lt;br /&gt;
Dialog and Interactive systems&lt;br /&gt;
Nobuhiro Kaji, Yahoo Japan Corporation, Japan&lt;br /&gt;
Zornitsa Kozareva, Google, USA&lt;br /&gt;
Sujith Ravi, Google, USA&lt;br /&gt;
Michael White, Ohio State University, USA&lt;br /&gt;
Discourse and Pragmatics&lt;br /&gt;
Ruihong Huang, Texas A&amp;amp;M University, USA&lt;br /&gt;
Vincent Ng, University of Texas at Dallas, USA&lt;br /&gt;
Ethics, Bias and Fairness&lt;br /&gt;
Saif Mohammad, National Research Council Canada, Canada&lt;br /&gt;
Mark Yatskar, University of Washington, USA&lt;br /&gt;
Generation&lt;br /&gt;
He He, Amazon Web Services, USA&lt;br /&gt;
	Wei Xu, Ohio State University, USA&lt;br /&gt;
	Yue Zhang, Westlake University, China&lt;br /&gt;
Information Extraction&lt;br /&gt;
Heng Ji, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
	David McClosky, Google, USA&lt;br /&gt;
	Gerard de Melo, Rutgers University, USA&lt;br /&gt;
	Timothy Miller, Boston Children’s Hospital, USA&lt;br /&gt;
	Mo Yu, IBM Research, USA&lt;br /&gt;
Information Retrieval&lt;br /&gt;
Sumit Bhatia, IBM’s India Research Laboratory, India&lt;br /&gt;
	Dina Demner-Fushman, US National Library of Medicine, USA&lt;br /&gt;
Machine Learning for NLP&lt;br /&gt;
Ryan Cotterell, Johns Hopkins University, USA&lt;br /&gt;
	Daichi Mochihashi, The Institute of Statistical Mathematics, Japan&lt;br /&gt;
	Marie-Francine Moens, KU Leuven, Belgium&lt;br /&gt;
	Vikram Ramanarayanan, Educational Testing Service, USA&lt;br /&gt;
	Anna Rumshisky, University of Massachusetts Lowell, USA&lt;br /&gt;
	Natalie Schluter, IT University of Copenhagen, Denmark&lt;br /&gt;
Machine Translation&lt;br /&gt;
Rafael E. Banchs, HLT Institute for Infocomm Research A*Star, Singapore&lt;br /&gt;
	Daniel Cer, Google Research, USA&lt;br /&gt;
	Haitao Mi, Ant Financial US, USA&lt;br /&gt;
	Preslav Nakov, Qatar Computing Research Institute, Qatar&lt;br /&gt;
	Zhaopeng Tu, Tencent, China&lt;br /&gt;
Mixed Topics&lt;br /&gt;
Ion Androutsopoulos, Athens Univ. of Economics and Business, Greece&lt;br /&gt;
	Steven Bethard, University of Arizona, USA&lt;br /&gt;
Multilingualism, Cross lingual resources&lt;br /&gt;
Željko Agić, IT University of Copenhagen, Denmark&lt;br /&gt;
	Ekaterina Shutova, University of Amsterdam, Netherlands&lt;br /&gt;
	Yulia Tsvetkov, Carnegie Mellon University, USA&lt;br /&gt;
	Ivan Vulic, Cambridge University, UK&lt;br /&gt;
NLP Applications&lt;br /&gt;
T. J. Hazen, Microsoft, USA&lt;br /&gt;
	Alessandro Moschitti, Amazon, USA&lt;br /&gt;
	Shimei Pan, University of Maryland Baltimore County, USA&lt;br /&gt;
	Wenpeng Yin, University of Pennsylvania, USA&lt;br /&gt;
	Su-Youn Yoon, Educational Testing Service, USA&lt;br /&gt;
Phonology, Morphology and Word Segmentation&lt;br /&gt;
Ramy Eskander, Columbia University, USA&lt;br /&gt;
Grzegorz Kondrak, University of Alberta, Canada&lt;br /&gt;
Question Answering&lt;br /&gt;
Eduardo Blanco, University of North Texas, USA&lt;br /&gt;
Christos Christodoulopoulos, Amazon, USA&lt;br /&gt;
Asif Ekbal, Indian Institute of Technology Patna, India&lt;br /&gt;
Yansong Feng, Peking University, China&lt;br /&gt;
Tim Rocktäschel, Facebook, USA&lt;br /&gt;
Avi Sil, IBM Research, USA&lt;br /&gt;
Resources and Evaluation&lt;br /&gt;
Torsten Zesch, University of Duisburg-Essen, Germany&lt;br /&gt;
Tristan Miller, Technische Universität Darmstadt, Germany&lt;br /&gt;
Semantics&lt;br /&gt;
Ebrahim Bagheri, Ryerson University, Canada&lt;br /&gt;
Samuel Bowman, New York University, USA&lt;br /&gt;
Matt Gardner, Allen Institute for Artificial Intelligence, USA&lt;br /&gt;
Kevin Gimpel, Toyota Technological Institute at Chicago, USA&lt;br /&gt;
Daisuke Kawahara, Kyoto University, Japan&lt;br /&gt;
Carlos Ramisch, Aix Marseille University, France&lt;br /&gt;
Sentiment Analysis&lt;br /&gt;
Isabelle Augenstein, University of Copenhagen, Denmark&lt;br /&gt;
Wai Lam, The Chinese University of Hong Kong, Hong Kong&lt;br /&gt;
Soujanya Poria, Nanyang Technological University, Singapore&lt;br /&gt;
Ivan Vladimir Meza Ruiz, UNAM, Mexico&lt;br /&gt;
Social Media&lt;br /&gt;
Dan Goldwasser, Purdue University, USA&lt;br /&gt;
Michael J. Paul, University of Colorado Boulder, USA&lt;br /&gt;
Sara Rosenthal, IBM Research, USA&lt;br /&gt;
Paolo Rosso, Universitat Politècnica de València, Spain&lt;br /&gt;
Chenhao Tan, University of Colorado Boulder, USA&lt;br /&gt;
Xiaodan Zhu, Queen’s University, Canada&lt;br /&gt;
Speech&lt;br /&gt;
Keelan Evanini, Educational Testing Service, USA&lt;br /&gt;
Yang Liu, LAIX Inc, USA&lt;br /&gt;
Style&lt;br /&gt;
Beata Beigman Klebanov, Educational Testing Service, USA&lt;br /&gt;
Manuel Montes, Instituto Nacional de Astrofísica, Óptica y Electrónica, Mexico&lt;br /&gt;
Joel Tetreault, Grammarly, USA&lt;br /&gt;
Summarization&lt;br /&gt;
Mohit Bansal, University of North Carolina Chapel Hill, USA&lt;br /&gt;
Fei Liu, University of Central Florida, USA&lt;br /&gt;
Ani Nenkova, University of Pennsylvania, USA&lt;br /&gt;
Tagging, Chunking, Syntax and Parsing&lt;br /&gt;
Adam Lopez, University of Edinburgh, Scotland&lt;br /&gt;
Roi Reichart, Technion – Israel Institute of Technology, Israel&lt;br /&gt;
Agata Savary, University of Tours, France&lt;br /&gt;
Guillaume Wisniewski, Université Paris Sud, France&lt;br /&gt;
Text Mining&lt;br /&gt;
Kai-Wei Chang, University of California Los Angeles, USA&lt;br /&gt;
Anna Feldman, Montclair State University, USA&lt;br /&gt;
Shervin Malmasi, Harvard Medical School, USA&lt;br /&gt;
Verónica Pérez-Rosas, University of Michigan, USA&lt;br /&gt;
Kevin Small, Amazon, USA&lt;br /&gt;
Diyi Yang, Carnegie Mellon University, USA&lt;br /&gt;
Theory and Formalisms&lt;br /&gt;
Valia Kordoni, Humboldt University Berlin, Germany&lt;br /&gt;
Andreas Maletti, University of Stuttgart, Germany&lt;br /&gt;
Vision, Robotics and other grounding&lt;br /&gt;
Francis Ferraro, University of Maryland Baltimore County, USA&lt;br /&gt;
Vicente Ordóñez, University of Virginia, USA&lt;br /&gt;
William Yang Wang, University of California Santa Barbara, USA&lt;br /&gt;
&lt;br /&gt;
= 	Main Innovations =&lt;br /&gt;
Conference theme&lt;br /&gt;
The CFP made a special request for papers addressing the tension between data privacy and model bias in NLP, including: using NLP for surveillance and profiling, balancing the need for broadly representative data sets with protections for individuals, understanding and addressing model bias, and determining where bias correction becomes censorship. The three invited speakers were all selected to tie into the theme, and a Best Thematic Paper award was given.&lt;br /&gt;
&lt;br /&gt;
Land Acknowledgement&lt;br /&gt;
Similar to what has been done in recent *CL conferences, the opening session included a land acknowledgement to recognize and honor Indigenous Peoples.&lt;br /&gt;
&lt;br /&gt;
Video Poster Highlights&lt;br /&gt;
This year included one-minute slides with pre-recorded audio showcasing the posters to be presented that day. The goal was to provide more visibility for posters. These were shown during the welcome reception, breakfast, and breaks.&lt;br /&gt;
&lt;br /&gt;
Remote Presentations&lt;br /&gt;
Remote presentations were supported for both talks and posters, via an application form to the committee. &lt;br /&gt;
&lt;br /&gt;
Diversity &amp;amp; Inclusion Organization&lt;br /&gt;
The new Diversity &amp;amp; Inclusion team piloted a number of new initiatives including:&lt;br /&gt;
* additional questions on the registration form to identify any needed accommodations&lt;br /&gt;
* preferred pronouns (optionally) added to badges&lt;br /&gt;
* “I’m hiring” / “I’m looking for a job” / “I’m new” badge stickers&lt;br /&gt;
&amp;lt;bunch of others, pull from their report&amp;gt;?&lt;br /&gt;
&lt;br /&gt;
=	Submissions =&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas, and recruiting additional area chairs where submissions exceeded our predicted volume. &lt;br /&gt;
* Pro: early response to areas with a larger-than-predicted number of papers&lt;br /&gt;
* Con: too much overhead for the PCs, as authors repeatedly contacted the chairs to request that papers be moved between long and short, or asked about changes to authorship, titles, and abstracts.&lt;br /&gt;
&lt;br /&gt;
Full papers available for bidding: reviewers loved it; authors did not&lt;br /&gt;
&lt;br /&gt;
3.1	An overview of statistics&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn prior to acceptance decisions being sent, some of them part way through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant we needed 5 parallel tracks to fit the papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consisted of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers were integrated into the program and marked accordingly.&lt;br /&gt;
&lt;br /&gt;
Acceptance break-down:&lt;br /&gt;
\begin{table}[h]&lt;br /&gt;
\centering&lt;br /&gt;
\begin{tabular}{|l|l|l|l|l|}&lt;br /&gt;
\hline&lt;br /&gt;
 &amp;amp;\textbf{Long}&amp;amp; \textbf{Short} &amp;amp;\textbf{Total} &amp;amp; \textbf{TACL}\\ \hline&lt;br /&gt;
Reviewed &amp;amp; 1067 &amp;amp; 666 &amp;amp; 1733 &amp;amp; \\&lt;br /&gt;
Accepted as talk &amp;amp; 140  &amp;amp; 72  &amp;amp;  212 &amp;amp; 4\\&lt;br /&gt;
Accepted as poster &amp;amp;  141 &amp;amp; 70  &amp;amp;  211 &amp;amp; 5\\&lt;br /&gt;
Total Accepted &amp;amp; 281 (26.3\%)  &amp;amp; 142 (21.3\%) &amp;amp; 423 (24.4\%) &amp;amp; 9\\&lt;br /&gt;
\hline&lt;br /&gt;
\end{tabular}&lt;br /&gt;
\end{table}&lt;br /&gt;
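As a sanity check, the acceptance rates in the table above can be recomputed from the raw counts (a minimal sketch; all counts are taken directly from the table, with talks and posters summed per format):

```python
# Recompute the acceptance rates in the table above from the raw counts.
reviewed = {"long": 1067, "short": 666}
accepted = {"long": 140 + 141, "short": 72 + 70}  # talks + posters

for fmt in ("long", "short"):
    rate = 100 * accepted[fmt] / reviewed[fmt]
    print(f"{fmt}: {accepted[fmt]}/{reviewed[fmt]} = {rate:.1f}%")

total_rate = 100 * sum(accepted.values()) / sum(reviewed.values())
print(f"total: {sum(accepted.values())}/{sum(reviewed.values())} = {total_rate:.1f}%")
# long: 281/1067 = 26.3%, short: 142/666 = 21.3%, total: 423/1733 = 24.4%
```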
3.2	Detailed statistics by area&lt;br /&gt;
&lt;br /&gt;
Area: Long (%), Short (%)&lt;br /&gt;
Bio and clinical NLP: 7 (57), 28 (17)&lt;br /&gt;
Question Answering: 73 (36), 41 (17)&lt;br /&gt;
Cognitive modeling: 24 (29), 14 (14)&lt;br /&gt;
Resources and Evaluation: 33 (27), 20 (20)&lt;br /&gt;
Dialog and Interactive systems: 64 (20), 18 (27)&lt;br /&gt;
Semantics: 80 (13), 42 (11)&lt;br /&gt;
Discourse and Pragmatics: 38 (21), 11 (36)&lt;br /&gt;
Sentiment Analysis: 32 (28), 40 (20)&lt;br /&gt;
Ethics, Bias and Fairness: 16 (25), 12 (50)&lt;br /&gt;
Social Media: 44 (18), 41 (36)&lt;br /&gt;
Generation: 46 (14), 19 (23)&lt;br /&gt;
Speech: 19 (31), 9 (33)&lt;br /&gt;
Information Extraction: 46 (28), 16 (12)&lt;br /&gt;
Style: 24 (25), 16 (25)&lt;br /&gt;
Information Retrieval: 22 (22), 13 (30)&lt;br /&gt;
Summarization: 22 (27), 28 (28)&lt;br /&gt;
Machine Learning for NLP: 100 (29), 22 (22)&lt;br /&gt;
Syntax: 36 (52), 54 (13)&lt;br /&gt;
Machine Translation: 49 (30), 53 (18)&lt;br /&gt;
Text Mining: 101 (18), 29 (24)&lt;br /&gt;
Multilingual NLP: 43 (25), 28 (10)&lt;br /&gt;
Theory and Formalisms: 12 (58), 12 (16)&lt;br /&gt;
NLP Applications: 60 (30), 41 (17)&lt;br /&gt;
Vision &amp;amp; Robotics: 41 (12), 22 (36)&lt;br /&gt;
Phonology: 24 (33), 24 (25)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3.3       Conference tracks&lt;br /&gt;
The Industry Track, in its second year, had 28 accepted papers (10 oral and 18 posters, acceptance rate ~28%), and ran a very well attended lunchtime Careers in Industry panel. The panelists were Judith Klavans, Yunyao Li, Owen Rambow, and Joel Tetreault, and Phil Resnik moderated.&lt;br /&gt;
&lt;br /&gt;
The Student Research Workshop had 23 accepted papers, distributed throughout the conference, and 19 submissions received pre-submission mentoring. For the first time, both archival and non-archival submissions were offered: authors who opted for the non-archival version will not have a paper in the archive and are free to publish the work elsewhere.&lt;br /&gt;
&lt;br /&gt;
There were 25 accepted Demos, which were spread across several of the poster sessions.&lt;br /&gt;
&lt;br /&gt;
=	Review Process =&lt;br /&gt;
We issued a wide call for volunteers for Area Chairs (ACs) and reviewers; the PCs screened the volunteers and assigned them to AC or reviewer roles.&lt;br /&gt;
The PCs created 25 specific areas plus one for “Mixed Topics” and assigned at least 2 ACs per area. After the abstract deadline, we added more ACs to areas with larger-than-predicted submission volumes.&lt;br /&gt;
&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale (so there was no “easy out” mid-point), distinct sections for summary, strengths, and weaknesses to make relevant sections easy to scan and compare, and START’s minimum-length feature enabled to elicit more consistently substantive content for the authors.&lt;br /&gt;
&lt;br /&gt;
Authors were blind to Area Chairs&lt;br /&gt;
Review assignment:&lt;br /&gt;
* Criteria: fairness, expertise, and interest&lt;br /&gt;
* Method: area chair expertise + Toronto Paper Matching System (TPMS) + reviewer bids, although many reviewers did not have TPMS profiles&lt;br /&gt;
* Load: the goal was no more than 5 papers per reviewer; some reviewers agreed to handle more&lt;br /&gt;
* Process: first-round accept/reject suggestions were made by area chairs, and final decisions were made by the program chairs&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
No author response: skipped due to time constraints and the finding from NAACL 2018 that it had little impact. Authors were unhappy about this; they really want to be able to respond to reviews.&lt;br /&gt;
Video poster highlights: shown instead of one-minute madness; A/V failures made it hard to assess their effectiveness.&lt;br /&gt;
SRW papers integrated into sessions: positive feedback from participants, and a better experience for students&lt;br /&gt;
Did not repeat the Test of Time awards from 2018--should these happen every N years, to allow for a sliding window?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4.1	Recruiting area chairs (ACs) and reviewers:&lt;br /&gt;
&lt;br /&gt;
Response: Area Chair (%), Reviewer (%)&lt;br /&gt;
Female: 24.4, 25.2&lt;br /&gt;
Male: 73, 71.7&lt;br /&gt;
Prefer not to answer: 2.6, 3.1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4.2	Assigning papers to areas and reviewers:&lt;br /&gt;
Assignment to areas was based on keywords and manual inspection of the paper. Assignment of papers to reviewers followed a combination of TPMS, reviewer bidding, and manual tweaking. &lt;br /&gt;
&lt;br /&gt;
4.3	Deciding on the reject-without-review papers:&lt;br /&gt;
Our process for identifying desk rejects was very similar to what other PCs have done in the past. First, the area chairs check their batch of assigned papers and report any issues to us. As reviewing begins, reviewers may also identify issues that were not caught by ACs, which they flag to ACs or directly to PCs. We then review each of these issues and make a final decision, to ensure that papers are handled consistently. This means each paper is reviewed for non-content issues by at least three people.&lt;br /&gt;
The major categories for desk rejects are:&lt;br /&gt;
Violations of the dual submission policy specified in the call for papers&lt;br /&gt;
Violations of the anonymity policy specified in the call for papers&lt;br /&gt;
“Format cheating”: submissions not following the clearly stated format and style guidelines in either LaTeX or Word (thanks to Emily and Leon for introducing the concept).&lt;br /&gt;
As of February 7th, out of 2378 submissions, there were 44 rejections for format issues, 24 for anonymity violations, and 11 for dual submissions. This means that a total of 79 submissions, roughly 3.3%, were desk-rejected.&lt;br /&gt;
&lt;br /&gt;
4.4	A large pool of reviewers&lt;br /&gt;
Similar to what other PCs have done in the past, we distributed a wide call for volunteers to recruit Area Chairs and Reviewers--we seeded the areas with the volunteers who responded, and the Area Chairs then filled out the remainder of their respective committees. Our goal was to increase diversity by including in each area some participants who had not previously been involved, and who therefore would not have been invited if the committees had been built from lists of previous reviewers. 390 of 1321 reviewers were reviewing for NAACL for the first time.&lt;br /&gt;
&lt;br /&gt;
4.5	Structured review form&lt;br /&gt;
We used a hybrid reviewing form combining elements of the EMNLP 2018, NAACL-HLT 2018, and ACL 2018 forms: a 6-point overall rating scale (so there was no “easy out” mid-point), distinct sections for summary, strengths, and weaknesses to make relevant sections easy to scan and compare, and START’s minimum-length feature enabled to elicit more consistently substantive content for the authors. The form received excellent feedback from authors, but some reviewers complained about it, and others outright circumvented the minimum length via HTML tags or repeated filler content.&lt;br /&gt;
&lt;br /&gt;
4.6	Abstract Submissions&lt;br /&gt;
&lt;br /&gt;
This year we followed a two-stage submission process, in which abstracts were due one week before full papers. Our goal was to get a head start on assigning papers to areas and on recruiting additional area chairs where submissions exceeded our predicted volume. Relative to the projected numbers from NAACL-HLT 2018, several areas received a higher-than-predicted number of submissions: Biomedical/Clinical, Dialogue, and Vision. Text Mining ended up with the largest overall number of submissions.&lt;br /&gt;
&lt;br /&gt;
4.7 Review process&lt;br /&gt;
&lt;br /&gt;
Authors were permitted to switch format (long/short) when they submitted the full papers, so the chart below uses 2271 as the total number of submissions, discounting the 103 abstracts that were never followed by a full paper in the second phase. Seventy-nine papers were desk-rejected due to anonymity, formatting, or dual-submission violations; 456 papers were withdrawn before acceptance decisions were sent, some of them partway through the review process; and an additional 11 papers were withdrawn after acceptance notifications had been sent. Keeping the acceptance rate consistent with past years meant we needed 5 parallel tracks to fit more papers into 3 days--as the conference grows, decisions will have to be made about continuing to add tracks, adding days to the main conference, or lowering the acceptance rate. The overall technical program consists of 423 main conference papers, plus 9 TACL papers, 23 SRW papers, 28 Industry papers, and 24 demos. The TACL and SRW papers are integrated into the program and are marked TACL or SRW accordingly.&lt;br /&gt;
&lt;br /&gt;
X reviews were received by the end of the review period, Y others within the next week.&lt;br /&gt;
&lt;br /&gt;
Importance of double-blind reviewing&lt;br /&gt;
&lt;br /&gt;
4.9	Statistics&lt;br /&gt;
&lt;br /&gt;
=	Best paper awards =&lt;br /&gt;
&lt;br /&gt;
Best Thematic Paper&lt;br /&gt;
What’s in a Name? Reducing Bias in Bios Without Access to Protected Attributes&lt;br /&gt;
Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky and Adam Kalai&lt;br /&gt;
&lt;br /&gt;
Best Explainable NLP Paper&lt;br /&gt;
CNM: An Interpretable Complex-valued Network for Matching&lt;br /&gt;
Qiuchi Li, Benyou Wang and Massimo Melucci&lt;br /&gt;
&lt;br /&gt;
Best Long Paper&lt;br /&gt;
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&lt;br /&gt;
Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova&lt;br /&gt;
&lt;br /&gt;
Best Short Paper&lt;br /&gt;
Probing the Need for Visual Context in Multimodal Machine Translation&lt;br /&gt;
Ozan Caglayan, Pranava Madhyastha, Lucia Specia and Loïc Barrault&lt;br /&gt;
&lt;br /&gt;
Best Resource Paper&lt;br /&gt;
CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge&lt;br /&gt;
Alon Talmor, Jonathan Herzig, Nicholas Lourie and Jonathan Berant&lt;br /&gt;
&lt;br /&gt;
=	Presentations=&lt;br /&gt;
Long-paper presentations: 22 sessions in total (4 sessions in parallel), duration: 15 minutes for talk + 3 minutes for questions + 2 dedicated Industry Track sessions&lt;br /&gt;
Short-paper presentations: 12 sessions in total (4 sessions in parallel), duration: 12 minutes for talk + 3 minutes for questions&lt;br /&gt;
Best-paper presentation: 1 session at the end of the last day&lt;br /&gt;
Posters: 8 sessions in total (1 session in parallel with every non-plenary talk session) + 1 dedicated Industry Poster session&lt;br /&gt;
&lt;br /&gt;
= 	Timeline =&lt;br /&gt;
&lt;br /&gt;
=	Issues and recommendations =&lt;br /&gt;
&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73082</id>
		<title>2019Q3 Reports: Program Chairs</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports:_Program_Chairs&amp;diff=73082"/>
		<updated>2019-07-16T19:26:10Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: Added organizing committee&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Program Committee =&lt;br /&gt;
&lt;br /&gt;
== Organising Committee == &lt;br /&gt;
&lt;br /&gt;
=== General Chair === &lt;br /&gt;
Jill Burstein, Educational Testing Service, USA &lt;br /&gt;
&lt;br /&gt;
=== Program Co-Chairs === &lt;br /&gt;
Christy Doran, Interactions LLC, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Thamar Solorio, University of Houston, USA&lt;br /&gt;
&lt;br /&gt;
=== Industry Track Co-chairs === &lt;br /&gt;
Rohit Kumar&amp;lt;br /&amp;gt;&lt;br /&gt;
Anastassia Loukina, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Michelle Morales, IBM, USA&lt;br /&gt;
&lt;br /&gt;
=== Workshop Co-Chairs === &lt;br /&gt;
Smaranda Muresan, Columbia University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Swapna Somasundaran, Educational Testing Service, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Elena Volodina, University of Gothenburg, Sweden&lt;br /&gt;
&lt;br /&gt;
=== Tutorial Co-Chairs === &lt;br /&gt;
Anoop Sarkar, Simon Fraser University, Canada&amp;lt;br /&amp;gt;&lt;br /&gt;
Michael Strube, Heidelberg Institute for Theoretical Studies, Germany&lt;br /&gt;
&lt;br /&gt;
=== System Demonstration Co-Chairs === &lt;br /&gt;
Waleed Ammar, Allen Institute for AI, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Annie Louis, University of Edinburgh, Scotland&amp;lt;br /&amp;gt;&lt;br /&gt;
Nasrin Mostafazadeh, Elemental Cognition, USA&lt;br /&gt;
&lt;br /&gt;
=== Publication Co-Chairs === &lt;br /&gt;
Stephanie Lukin, U.S. Army Research Laboratory&amp;lt;br /&amp;gt;&lt;br /&gt;
Alla Rozovskaya, City University of New York, USA&lt;br /&gt;
&lt;br /&gt;
=== Handbook Chair === &lt;br /&gt;
Steve DeNeefe, SDL, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Research Workshop Co-Chairs &amp;amp; Faculty Advisors === &lt;br /&gt;
Sudipta Kar, University of Houston, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Farah Nadeem, University of Washington, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Laura Wendlandt, University of Michigan, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Greg Durrett, University of Texas at Austin, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Na-Rae Han, University of Pittsburgh, USA&lt;br /&gt;
&lt;br /&gt;
=== Diversity &amp;amp; Inclusion Co-Chairs === &lt;br /&gt;
Jason Eisner, Johns Hopkins University, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Natalie Schluter, IT University, Copenhagen, Denmark&lt;br /&gt;
&lt;br /&gt;
=== Publicity &amp;amp; Social Media Co-Chairs === &lt;br /&gt;
Yuval Pinter, Georgia Institute of Technology, USA &amp;lt;br /&amp;gt;&lt;br /&gt;
Rachael Tatman, Kaggle, USA&lt;br /&gt;
&lt;br /&gt;
=== Website &amp;amp; Conference App Chair === &lt;br /&gt;
Nitin Madnani, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Student Volunteer Coordinator === &lt;br /&gt;
Lu Wang, Northeastern University, USA&lt;br /&gt;
&lt;br /&gt;
=== Video Chair === &lt;br /&gt;
Spencer Whitehead, Rensselaer Polytechnic Institute, USA&lt;br /&gt;
&lt;br /&gt;
=== Remote Presentation Co-Chairs === &lt;br /&gt;
Meg Mitchell, Google, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Abhinav Misra, Educational Testing Service, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Sponsorship Co-Chairs === &lt;br /&gt;
Chris Callison-Burch, University of Pennsylvania, USA&amp;lt;br /&amp;gt;&lt;br /&gt;
Tonya Custis, Thomson Reuters, USA&lt;br /&gt;
&lt;br /&gt;
=== Local Organization === &lt;br /&gt;
Priscilla Rasmussen, ACL&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports&amp;diff=72932</id>
		<title>2019Q3 Reports</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2019Q3_Reports&amp;diff=72932"/>
		<updated>2019-07-10T19:16:39Z</updated>

		<summary type="html">&lt;p&gt;ChristyDoran: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Reports from ACL Management&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[2019Q3 Reports: Office Manager]] (Priscilla Rasmussen)&lt;br /&gt;
* [[2019Q3 Reports: Secretary]] (Shiqi Zhao)&lt;br /&gt;
* [[2019Q3 Reports: Treasurer]] (David Yarowsky)&lt;br /&gt;
* [[2019Q3 Reports: NAACL]] (Julia Hockenmaier)&lt;br /&gt;
* [[2019Q3 Reports: EACL]] (Walter Daelemans)&lt;br /&gt;
* [[2019Q3 Reports: AACL]] (Haifeng Wang)&lt;br /&gt;
* [[2019Q3 Reports: SIG Officer]] (Jennifer Foster)&lt;br /&gt;
* [[2019Q3 Reports: Conference Officer]] (Barbara Di Eugenio)&lt;br /&gt;
* [[2019Q3 Reports: Information Officer]] (Nitin Madnani)&lt;br /&gt;
* [[2019Q3 Reports: Professional Conduct Committee]] (Graeme Hirst, Emily Bender)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NAACL 2019&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[2019Q3 Reports: Program Chairs]] (Christy Doran, Thamar Solorio)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;ACL 2019&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[2019Q3 Reports: General Chair]] (Lluís Màrquez)&lt;br /&gt;
* [[2019Q3 Reports: Program Chairs]] (Anna Korhonen, David Traum)&lt;br /&gt;
* [[2019Q3 Reports: Local Organizing Co-chairs]] (Alessandro Lenci, Bernardo Magnini, Simonetta Montemagni)&lt;br /&gt;
* [[2019Q3 Reports: Workshop Chairs]] (Barbara Plank, Sebastian Riedel)&lt;br /&gt;
* [[2019Q3 Reports: Tutorial Chairs]] (Preslav Nakov, Alexis Palmer)&lt;br /&gt;
* [[2019Q3 Reports: Publications Chairs]] (Douwe Kiela, Ivan Vulić, Shay Cohen, Kevin Gimpel)&lt;br /&gt;
* [[2019Q3 Reports: Demonstration Chairs]] (Enrique Alfonseca, Marta R. Costa-jussà)&lt;br /&gt;
* [[2019Q3 Reports: Student Research Workshop Chairs]] (Fernando Alva-Manchego, Eunsol Choi, Daniel Khashabi and SRW Faculty advisors Hannaneh Hajishirzi, Aurelie Herbelot, Scott Yih, Yue Zhang)&lt;br /&gt;
* [[2019Q3 Reports: Publicity Chairs]] (Felice Dell&#039;Orletta, Lucia Passaro, Sara Tonelli)&lt;br /&gt;
* [[2019Q3 Reports: Conference Handbook Chair]] (Elena Cabrio, Rachele Sprugnoli)&lt;br /&gt;
* [[2019Q3 Reports: Mentorship Co-Chairs]] (Rada Mihalcea, Robert Frederking)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Journals, Publications, and the Web&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[2019Q3 Reports: CL Journal Editor]] (Hwee Tou Ng)&lt;br /&gt;
* [[2019Q3 Reports: TACL Journal Editor]] (Lillian Lee, Mark Johnson, and Brian Roark)&lt;br /&gt;
* [[2019Q3 Reports: ACL Anthology]] (Matt Post)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recent Conferences&#039;&#039;&#039;&lt;br /&gt;
 &lt;br /&gt;
* [[2019Q3 Reports: EMNLP 2018]] (Ellen Riloff)&lt;br /&gt;
* [[2019Q3 Reports: NAACL 2019]] (Jill Burstein)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Future Conferences&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[2019Q3 Reports: EMNLP 2019]] (Kentaro Inui)&lt;br /&gt;
* [[2019Q3 Reports: ACL 2020]] (Ming Zhou)&lt;br /&gt;
* [[2019Q3 Reports: ACL 2021]] (Hinrich Schuetze)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SIG &amp;amp; BIG&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[2019Q3 Reports: SIGANN]] (Nancy Ide)&lt;br /&gt;
* [[2019Q3 Reports: SIGBioMed]] (Kevin Bretonnel Cohen)&lt;br /&gt;
* [[2019Q3 Reports: SIGDAT]] (Jian Su)&lt;br /&gt;
* [[2019Q3 Reports: SIGDIAL]] (Gabriel Skantze)&lt;br /&gt;
* [[2019Q3 Reports: SIGEDU]] (Jill Burstein)&lt;br /&gt;
* [[2019Q3 Reports: SIGFSM]] (Andreas Maletti)&lt;br /&gt;
* [[2019Q3 Reports: SIGGEN]] (Ehud Reiter)&lt;br /&gt;
* [[2019Q3 Reports: SIGHAN]] (Min Zhang)&lt;br /&gt;
* [[2019Q3 Reports: SIGHUM]] (Caroline Sporleder)&lt;br /&gt;
* [[2019Q3 Reports: SIGLEX]] (Anna Korhonen)&lt;br /&gt;
* [[2019Q3 Reports: SIGMOL]] (Philippe de Groote)&lt;br /&gt;
* [[2019Q3 Reports: SIGMORPHON]] (Jason Eisner)&lt;br /&gt;
* [[2019Q3 Reports: SIGMT]] (Philipp Koehn)&lt;br /&gt;
* [[2019Q3 Reports: SIGNLL]] (Julia Hockenmaier)&lt;br /&gt;
* [[2019Q3 Reports: SIGPARSE]] (Stephan Oepen)&lt;br /&gt;
* [[2019Q3 Reports: SIGRREP]] (Isabelle Augenstein)&lt;br /&gt;
* [[2019Q3 Reports: SIGSEM]] (Katrin Erk)&lt;br /&gt;
* [[2019Q3 Reports: SIGSLAV]] (Tomaž Erjavec)&lt;br /&gt;
* [[2019Q3 Reports: SIGSLPAT]] (Frank Rudzicz)&lt;br /&gt;
* [[2019Q3 Reports: SIGUR]] (Tommi A. Pirinen)&lt;br /&gt;
* [[2019Q3 Reports: SIGWAC]] (Roland Schafer)&lt;br /&gt;
* [[2019Q3 Reports: EquiCL]] (Marine Carpuat)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Exec Meeting&#039;&#039;&#039;&lt;br /&gt;
* [[2019Q3 Agenda]]&lt;br /&gt;
* [[2019Q3 Minutes (public version)]]&lt;/div&gt;</summary>
		<author><name>ChristyDoran</name></author>
	</entry>
</feed>