<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.aclweb.org/adminwiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=AmandaStent</id>
	<title>Admin Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.aclweb.org/adminwiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=AmandaStent"/>
	<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=Special:Contributions/AmandaStent"/>
	<updated>2026-04-21T20:07:36Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.6</generator>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75030</id>
		<title>2022Q1 Reports: ACL Rolling Review</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75030"/>
		<updated>2022-03-03T21:58:24Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== ARR: Looking Back == &lt;br /&gt;
&lt;br /&gt;
How did ARR come to exist? In the fall of 2019 and spring of 2020, the ACL Exec convened a committee to discuss the future of peer review for *ACL venues. This committee consisted primarily of recent program chairs of large *ACL conferences, and was tasked with considering how to handle “the rapid growth of submissions”. The core problems were stated as follows:&lt;br /&gt;
&lt;br /&gt;
* There are many submissions to major NLP conferences each year; the rate of increase at the time was exponential&lt;br /&gt;
* Low acceptance rates at major NLP conferences led us to suspect that many submissions get resubmitted over and over&lt;br /&gt;
&lt;br /&gt;
In addition, arXiv increasingly interferes with the peer review process: some authors complain of bias in favor of arXiv preprints, while others complain of the slowness of peer review and of the risks posed by low acceptance rates at major NLP conferences.&lt;br /&gt;
&lt;br /&gt;
The committee came up with two sets of proposals: https://www.aclweb.org/portal/content/short-term-reform-proposals-acl-reviewing (June 2020) and https://www.aclweb.org/portal/content/long-term-reform-proposal-acl-reviewing (June 2020). &lt;br /&gt;
&lt;br /&gt;
Regarding the core goal of ARR, cutting review load and increasing review consistency through revise-and-resubmit (R&amp;amp;R), submission counts to date are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Venue &lt;br /&gt;
! 2020 &lt;br /&gt;
! 2021 &lt;br /&gt;
! 2022 &lt;br /&gt;
|-&lt;br /&gt;
| ACL &lt;br /&gt;
| 3429 &lt;br /&gt;
| 3350 &lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| NAACL &lt;br /&gt;
| - &lt;br /&gt;
| 1797 &lt;br /&gt;
| - &lt;br /&gt;
|-&lt;br /&gt;
| ARR &lt;br /&gt;
| - &lt;br /&gt;
| 3939 (through November) &lt;br /&gt;
| 2093 (incl December 2021) &lt;br /&gt;
|-&lt;br /&gt;
| ARR resubmissions &lt;br /&gt;
| - &lt;br /&gt;
| 221 (through November) &lt;br /&gt;
| 632 (December 2021/January 2022)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
These 853 resubmissions would, without ARR, almost certainly have gone to new reviewers who would have had to write fresh reviews with no knowledge of the previous submissions or reviews; instead, they went to reviewers (in the majority of cases the same ones, though sometimes new ones) who had access to the previous reviews. It is important to stress that these numbers paint an incomplete picture of ARR’s potential gains, as ARR has not yet run through one complete year (that is, one full “conference season”). By the fall report we will have a clearer picture of the overall reduction in reviewing effort, since we will know how many papers rejected from ACL/NAACL 2022 were re-committed to AACL-IJCNLP, EMNLP 2022, or another *ACL 2022 venue (e.g. *ACL workshops) without incurring new reviews.&lt;br /&gt;
&lt;br /&gt;
We note that another of the short-term proposals, Findings, has been widely taken up; this also reduces reviewing load, as papers accepted into Findings are no longer eligible for submission to ARR or to *ACL venues. &lt;br /&gt;
&lt;br /&gt;
To the point about arXiv:&lt;br /&gt;
* ARR’s more frequent submission cycles allow authors to get feedback more quickly&lt;br /&gt;
* Almost 900 ARR submissions to date have opted in to be hosted as anonymous preprints&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Stats == &lt;br /&gt;
&lt;br /&gt;
===  Submissions and resubmissions ===&lt;br /&gt;
&lt;br /&gt;
Since the first deadline in May 2021, ARR has run 10 cycles and received a total of more than 6000 submissions, including more than 800 resubmissions. Full statistics can be found on our website, http://stats.aclrollingreview.org/.&lt;br /&gt;
&lt;br /&gt;
===  Authors and reviewers ===&lt;br /&gt;
&lt;br /&gt;
A total of 13801 unique OpenReview profiles have been associated with ARR to date, across authors, reviewers, action editors, tech team, and program chairs.&lt;br /&gt;
&lt;br /&gt;
==== Authors, author experience and authors as reviewers ====&lt;br /&gt;
&lt;br /&gt;
11787 unique author IDs are associated with ARR submissions. The distribution of the number of submissions per author is shown in the chart below (5th percentile: 1; median: 1; mean: 1.95; 95th percentile: 5; max: 46).&lt;br /&gt;
&lt;br /&gt;
[[File:Figure1.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for authors who have provided Semantic Scholar IDs (5th percentile: 1; median: 12; mean: 33; 95th percentile: 179). As expected, authors skew “junior”, with many having 0-5 publications - and of course, authors with no publications do not yet have Semantic Scholar profiles.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure2.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
==== Reviewers, reviewer load and reviewer experience ====&lt;br /&gt;
&lt;br /&gt;
4391 unique reviewer IDs are associated with ARR cycles to date. The distribution of the number of reviews completed per reviewer, among reviewers who completed at least one review (including reviews of resubmissions), is shown in the chart below (5th percentile: 1; median: 6; mean: 5.47; 95th percentile: 10; max: 17). The cumulative load per reviewer across 10 months is thus approximately 6 papers on average.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure3.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for reviewers who have provided Semantic Scholar IDs (5th percentile: 3; median: 20; mean: 40.26; 95th percentile: 170). Reviewers skew less junior than authors. When the ARR EiCs invite someone to review, the person must either have at least 5 publications, with at least one in the past 5 years, or be nominated by a senior action editor, action editor, or EiC. However, the statistics here also include emergency reviewers, who may be more junior. &lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
==== Action editors, action editor load  and action editor experience ====&lt;br /&gt;
&lt;br /&gt;
531 unique action editor IDs are associated with ARR cycles to date. The distribution of the number of metareviews completed per action editor, among those who completed at least one metareview (including metareviews of resubmissions), is shown in the chart below (5th percentile: 1; median: 10; mean: 8.67; 95th percentile: 14; max: 20).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure5.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for action editors who have provided Semantic Scholar IDs (5th percentile: 5; median: 54; mean: 71.39; 95th percentile: 200). Action editors skew less junior than reviewers. Our initial pool of action editors was drawn from area chairs and senior area chairs of recent *ACL conferences, plus recent workshop organizers, balancing as much as possible for diversity of geography and affiliation type (industry/academia).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure6.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Innovations ==&lt;br /&gt;
&lt;br /&gt;
With ARR (a single system and process for reviewing), we have been able to satisfy several of the short-term proposals for improving reviewing and to roll out other initiatives, including:&lt;br /&gt;
* Anonymous preprints - hosting anonymous preprints for NLP papers through OpenReview. Almost 900 papers opted in to anonymous preprints through February 2022.&lt;br /&gt;
* Revise and resubmit - authors may revise and resubmit their papers, which we try to match to the previous reviewers (if available) unless the authors request new reviewers. Over 850 ARR submissions through January are resubmissions. See our stats dashboard for month-by-month breakdowns of resubmissions and requests for reviewer reassignment.&lt;br /&gt;
* Reviews as data (with Iryna Gurevych, Ilia Kuznetsov and Nils Dycke) - authors and reviewers may opt-in to provide reviews for research (see https://openreview.net/forum?id=28n-0nBPTch).&lt;br /&gt;
* Reviewer mentoring (with Isabelle Augenstein and Anna Rogers) - providing mentoring to junior reviewers and training to all reviewers (see https://aclrollingreview.org/reviewertutorial).&lt;br /&gt;
* Responsible NLP research (with the NAACL program and reproducibility chairs) - providing a responsible NLP research checklist for authors, and training about ethics and reproducibility in NLP research (see https://aclrollingreview.org/responsibleNLPresearch/). Since December, all ARR submissions have completed the checklist. We understand the NAACL 2022 reproducibility chairs are working on an analysis.&lt;br /&gt;
* Ethics review (with the ACL ethics committee) - providing guidelines for ethics reviewing (https://aclrollingreview.org/ethicsreviewertutorial).&lt;br /&gt;
* Review statistics - providing statistics on the *ACL review process over time (http://stats.aclrollingreview.org/).&lt;br /&gt;
* Reviewer recognition (coming to the February cycle!) - providing recognition letters to reviewers about their service (https://aclrollingreview.org/recognition/).&lt;br /&gt;
&lt;br /&gt;
== ARR: Looking ahead ==&lt;br /&gt;
&lt;br /&gt;
In 2022, ARR will be used by EMNLP, AACL-IJCNLP, INLG, SIGDIAL, and numerous *ACL workshops (https://aclrollingreview.org/dates).&lt;br /&gt;
&lt;br /&gt;
Future ARR initiatives will build on the current initiatives listed above. For example, we will soon roll out:&lt;br /&gt;
* Review quality assessment - we are starting an initiative where authors and AEs rate review quality. This data will feed into reviewer mentoring and reviewer recognition (identifying reviewers whose reviews are outstanding).&lt;br /&gt;
* Submissions-over-time statistics - what happens to NLP papers? For example, how often is a typical paper resubmitted before it is committed to, and accepted at, a venue? &lt;br /&gt;
* Improved reviewer assignment - Graham Neubig continues to explore ways to improve reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
ARR has invited permanent Senior Action Editors, each responsible for a particular area of NLP. ARR is also inviting permanent ethics reviewers.&lt;br /&gt;
&lt;br /&gt;
The ACL exec will add more EiCs to ARR so that we can effectively distribute the load of managing each cycle as well as building out ethics reviewing, mentoring, and other ARR initiatives. The tech team will recruit a co-CTO.&lt;br /&gt;
&lt;br /&gt;
Finally, by agreement with the ACL Exec, ARR will move to a six-week cycle effective April 15, 2022. This will give 9 cycles per year, allow all involved to finish one cycle before the next deadline, eliminate the exceptionally troublesome &amp;quot;December 15th&amp;quot; cycle, and give reviewers more breathing room. &lt;br /&gt;
&lt;br /&gt;
== Concerns and Challenges ==&lt;br /&gt;
&lt;br /&gt;
ARR is a paper-centric reviewing service, independent of any conference; it is not our job to provide acceptance/rejection decisions. Many authors feel unsure about this new process, understandably so. AEs and reviewers are also new to the rolling process and are concerned about their ongoing loads. Conference chairs are likewise unsure of the new process and opt for a hybrid mode instead of supporting ARR fully, again understandably so. Overall, this means we are constantly addressing concerns from multiple interests, some in conflict with each other (for example, authors want immediate, in-depth reviews, while reviewers want low loads and plenty of time to review). &lt;br /&gt;
&lt;br /&gt;
Moving to a new submission and review infrastructure (OpenReview) has proved challenging. The OpenReview team are very collegial and helpful, but the structure of ARR is unusual for OpenReview, which lacked many of the features we needed to implement ARR, including basic automations such as reminder emails and automatic checking of resubmissions. Consequently, our tech team has worked very hard to implement the initiatives outlined above while rolling out ARR, including AE assignment, reviewer assignment, and consistent conflict-of-interest detection.&lt;br /&gt;
&lt;br /&gt;
In order for peer review to work, everyone must participate. Because the *ACL community skews young (see the statistics above), we must train junior reviewers and continue to involve more senior people in the reviewing process. To the extent that people think they can submit many papers, insist on high-quality reviews, and yet provide no reviewing service themselves, any peer review system will fail. It is critical for experienced researchers to continue to participate in peer review. The chart below shows the distribution of the number of submissions by authors who are qualified to review for ARR (at least 5 papers in relevant fields, the newest no more than 5 years old) yet have provided 0 reviews to date (x axis: number of people; y axis: number of submissions). &lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure7.PNG|250px]]&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=File:Arrq122figure7.PNG&amp;diff=75028</id>
		<title>File:Arrq122figure7.PNG</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=File:Arrq122figure7.PNG&amp;diff=75028"/>
		<updated>2022-03-03T21:56:55Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75027</id>
		<title>2022Q1 Reports: ACL Rolling Review</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75027"/>
		<updated>2022-03-03T21:56:12Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== ARR: Looking Back == &lt;br /&gt;
&lt;br /&gt;
How did ARR come to exist? In the fall of 2019 and spring of 2020, the ACL Exec convened a committee to discuss the future of peer review for *ACL venues. This committee consisted primarily of recent program chairs of large *ACL conferences, and was tasked with considering how to handle “the rapid growth of submissions”. The core problems were stated as follows:&lt;br /&gt;
&lt;br /&gt;
* There are many submissions to major NLP conferences each year; the rate of increase at the time was exponential&lt;br /&gt;
* Low acceptance rates at major NLP conferences led us to suspect that many submissions get resubmitted over and over&lt;br /&gt;
&lt;br /&gt;
In addition, arXiv increasingly interferes with the peer review process: some authors complain of bias in favor of arXiv preprints, while others complain of the slowness of peer review and of the risks posed by low acceptance rates at major NLP conferences.&lt;br /&gt;
&lt;br /&gt;
The committee came up with two sets of proposals: https://www.aclweb.org/portal/content/short-term-reform-proposals-acl-reviewing (June 2020) and https://www.aclweb.org/portal/content/long-term-reform-proposal-acl-reviewing (June 2020). &lt;br /&gt;
&lt;br /&gt;
Regarding the core goal of ARR, cutting review load and increasing review consistency through revise-and-resubmit (R&amp;amp;R), submission counts to date are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Venue &lt;br /&gt;
! 2020 &lt;br /&gt;
! 2021 &lt;br /&gt;
! 2022 &lt;br /&gt;
|-&lt;br /&gt;
| ACL &lt;br /&gt;
| 3429 &lt;br /&gt;
| 3350 &lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| NAACL &lt;br /&gt;
| - &lt;br /&gt;
| 1797 &lt;br /&gt;
| - &lt;br /&gt;
|-&lt;br /&gt;
| ARR &lt;br /&gt;
| - &lt;br /&gt;
| 3939 (through November) &lt;br /&gt;
| 2093 (incl December 2021) &lt;br /&gt;
|-&lt;br /&gt;
| ARR resubmissions &lt;br /&gt;
| - &lt;br /&gt;
| 221 (through November) &lt;br /&gt;
| 632 (December 2021/January 2022)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
These 853 resubmissions would, without ARR, almost certainly have gone to new reviewers who would have had to write fresh reviews with no knowledge of the previous submissions or reviews; instead, they went to reviewers (in the majority of cases the same ones, though sometimes new ones) who had access to the previous reviews. It is important to stress that these numbers paint an incomplete picture of ARR’s potential gains, as ARR has not yet run through one complete year (that is, one full “conference season”). By the fall report we will have a clearer picture of the overall reduction in reviewing effort, since we will know how many papers rejected from ACL/NAACL 2022 were re-committed to AACL-IJCNLP, EMNLP 2022, or another *ACL 2022 venue (e.g. *ACL workshops) without incurring new reviews.&lt;br /&gt;
&lt;br /&gt;
We do note that another of the short-term proposals, Findings, has been widely taken up, also reducing reviewing load: Findings papers are no longer eligible for submission to ARR or other *ACL venues.&lt;br /&gt;
&lt;br /&gt;
To the point about arXiv:&lt;br /&gt;
* ARR’s more frequent submission cycles allow authors to get feedback more quickly&lt;br /&gt;
* Almost 900 ARR submissions to date have opted in to be hosted as anonymous preprints&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Stats == &lt;br /&gt;
&lt;br /&gt;
===  Submissions and resubmissions ===&lt;br /&gt;
&lt;br /&gt;
Since the first deadline in May 2021, ARR has run 10 cycles and received more than 6000 submissions in total, including more than 800 resubmissions. Full statistics can be found on our [http://stats.aclrollingreview.org/ statistics website].&lt;br /&gt;
&lt;br /&gt;
===  Authors and reviewers ===&lt;br /&gt;
&lt;br /&gt;
A total of 13801 unique OpenReview profiles have been associated with ARR to date, across authors, reviewers, action editors, tech team, and program chairs.&lt;br /&gt;
&lt;br /&gt;
==== Authors, author experience and authors as reviewers ====&lt;br /&gt;
&lt;br /&gt;
11787 unique author IDs are associated with ARR submissions. The distribution of the number of submissions per author is shown in the chart below (5th percentile: 1; median: 1; mean: 1.95; 95th percentile: 5; max: 46).&lt;br /&gt;
&lt;br /&gt;
[[File:Figure1.PNG|400px]]&lt;br /&gt;
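Summary statistics of the shape quoted above (5th/95th percentiles, median, mean, max over per-author submission counts) can be computed with a short script. A minimal sketch; the data here is a toy stand-in, not the actual ARR distribution:

```python
from statistics import mean, median

def summarize(counts):
    """Nearest-rank percentiles plus median/mean/max, matching the shape
    of the per-author statistics quoted in this report."""
    s = sorted(counts)

    def pct(p):
        # nearest-rank percentile: element at the (clamped) rank index
        i = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
        return s[i]

    return {"p5": pct(5), "median": median(s),
            "mean": round(mean(s), 2), "p95": pct(95), "max": s[-1]}

# Toy stand-in data; the real distribution lives on the ARR stats dashboard.
print(summarize([1] * 60 + [2] * 20 + [3] * 10 + [5] * 8 + [46] * 2))
```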
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for authors who have provided Semantic Scholar IDs (5th percentile: 1; median: 12; mean: 33; 95th percentile: 179). As expected, authors skew “junior”, with many having 0–5 publications; of course, authors with no publications do not yet have Semantic Scholar profiles.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure2.PNG|400px]]&lt;br /&gt;
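A publication-count estimate like this can be looked up through the public Semantic Scholar Graph API. A sketch; the `paperCount` lookup below is our illustration of such an estimate, not necessarily the exact pipeline ARR uses:

```python
import json
from urllib.request import urlopen

# Graph API author lookup; paperCount approximates the "number of
# previous publications" used here as a seniority proxy.
API = "https://api.semanticscholar.org/graph/v1/author/{}?fields=name,paperCount"

def author_url(author_id):
    """Build the request URL for one Semantic Scholar author ID."""
    return API.format(author_id)

def paper_count(author_id):
    """Fetch the author's publication count (network call)."""
    with urlopen(author_url(author_id)) as resp:
        return json.load(resp).get("paperCount")
```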
&lt;br /&gt;
==== Reviewers, reviewer load and reviewer experience ====&lt;br /&gt;
&lt;br /&gt;
4391 unique reviewer IDs are associated with ARR cycles to date. The distribution of the number of reviews completed per reviewer who completed at least one review (including reviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 6; mean: 5.47; 95th percentile: 10; max: 17). In other words, the cumulative load per reviewer across 10 months is approximately 6 papers on average.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure3.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for reviewers who have provided Semantic Scholar IDs (5th percentile: 3; median: 20; mean: 40.26; 95th percentile: 170). Reviewers skew less junior than authors. When the ARR EiCs invite someone to review, the person must either have at least 5 publications, with at least one in the past 5 years, or be nominated by a senior action editor, action editor or EiC. However, the statistics here also include emergency reviewers, who may be more junior.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure4.PNG|400px]]&lt;br /&gt;
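The invitation rule above can be written down directly. A minimal sketch of the stated rule (a hypothetical helper, not ARR's actual tooling):

```python
from datetime import date

def eligible_reviewer(pub_years, nominated=False, current_year=None):
    """The invitation rule as stated in this report: at least 5
    publications with at least one in the past 5 years, OR nomination
    by a senior action editor, action editor or EiC."""
    year = current_year if current_year is not None else date.today().year
    has_track_record = (len(pub_years) >= 5
                        and any(y >= year - 5 for y in pub_years))
    return has_track_record or nominated

print(eligible_reviewer([2016, 2017, 2018, 2019, 2021], current_year=2022))  # prints True
```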
&lt;br /&gt;
==== Action editors, action editor load  and action editor experience ====&lt;br /&gt;
&lt;br /&gt;
531 unique action editor IDs are associated with ARR cycles to date. The distribution of the number of metareviews completed per action editor who completed at least one metareview (including metareviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 10; mean: 8.67; 95th percentile: 14; max: 20).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure5.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for action editors who have provided Semantic Scholar IDs (5th percentile: 5; median: 54; mean: 71.39; 95th percentile: 200). Action editors skew less junior than reviewers. Our initial pool of action editors was drawn from area chairs and senior area chairs of recent *ACL conferences, plus recent workshop organizers, balancing as much as possible for diversity in geography and affiliation type (industry/academia).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure6.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Innovations ==&lt;br /&gt;
&lt;br /&gt;
With ARR (a single system and process for reviewing) we have been able to satisfy several of the short-term proposals for improving reviewing and to roll out other initiatives, including:&lt;br /&gt;
* Anonymous preprints - hosting anonymous preprints for NLP papers through OpenReview. Almost 900 papers opted in to anonymous preprints through February 2022.&lt;br /&gt;
* Revise and resubmit - authors may revise and resubmit their papers, which we try to match to the previous reviewers (if available) unless the authors request new reviewers. Over 850 ARR submissions through January are resubmissions. See our stats dashboard for month-by-month breakdowns of resubmissions and requests for reviewer reassignment.&lt;br /&gt;
* Reviews as data (with Iryna Gurevych, Ilia Kuznetsov and Nils Dycke) - authors and reviewers may opt-in to provide reviews for research (see https://openreview.net/forum?id=28n-0nBPTch).&lt;br /&gt;
* Reviewer mentoring (with Isabelle Augenstein and Anna Rogers) - providing mentoring to junior reviewers and training to all reviewers (see https://aclrollingreview.org/reviewertutorial).&lt;br /&gt;
* Responsible NLP research (with the NAACL program and reproducibility chairs) - providing a responsible NLP research checklist for authors, and training about ethics and reproducibility in NLP research (see https://aclrollingreview.org/responsibleNLPresearch/). Since December, all ARR submissions have completed the checklist. We understand the NAACL 2022 reproducibility chairs are working on an analysis.&lt;br /&gt;
* Ethics review (with the ACL ethics committee) - providing guidelines for ethics reviewing (https://aclrollingreview.org/ethicsreviewertutorial).&lt;br /&gt;
* Review statistics - providing statistics on the *ACL review process over time (http://stats.aclrollingreview.org/).&lt;br /&gt;
* Reviewer recognition (coming to the February cycle!) - providing recognition letters to reviewers about their service (https://aclrollingreview.org/recognition/).&lt;br /&gt;
&lt;br /&gt;
== ARR: Looking ahead ==&lt;br /&gt;
&lt;br /&gt;
In 2022, ARR will be used by EMNLP and AACL-IJCNLP, INLG and SIGDIAL, and numerous *ACL workshops (https://aclrollingreview.org/dates).&lt;br /&gt;
&lt;br /&gt;
Future ARR initiatives will build on the current initiatives listed above. For example, we will soon roll out:&lt;br /&gt;
* Review quality assessment - we are starting an initiative where authors and AEs rate review quality. This data will feed into reviewer mentoring and reviewer recognition (identifying reviewers whose reviews are outstanding).&lt;br /&gt;
* Submissions-over-time statistics - what happens to NLP papers over time? For example, how often is a typical paper resubmitted before it is committed to and accepted at a venue?&lt;br /&gt;
* Improved reviewer assignment - Graham Neubig continues to explore ways to improve reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
ARR has invited permanent Senior Action Editors, each responsible for a particular area of NLP. ARR is also inviting permanent ethics reviewers.&lt;br /&gt;
&lt;br /&gt;
The ACL exec will add more EiCs to ARR so that we can effectively distribute the load of managing each cycle as well as building out ethics reviewing, mentoring, and other ARR initiatives. The tech team will recruit a co-CTO.&lt;br /&gt;
&lt;br /&gt;
Finally, by agreement with the ACL exec, ARR will move to a six-week cycle effective April 15, 2022. This will give 9 cycles per year, allow everyone involved to finish one cycle before the next deadline, eliminate the exceptionally troublesome &amp;quot;December 15th&amp;quot; cycle, and give reviewers more breathing room.&lt;br /&gt;
&lt;br /&gt;
== Concerns and Challenges ==&lt;br /&gt;
&lt;br /&gt;
ARR is a paper-centric reviewing service, independent of any conference; it is not our job to make acceptance or rejection decisions. Many authors feel unsure of this new process, and understandably so. AEs and reviewers are also new to the rolling process and are concerned about their ongoing loads. Conference chairs are likewise unsure of the new process and opt for a hybrid mode instead of supporting ARR fully, again understandably so. Overall, this means we are constantly addressing concerns from multiple stakeholders, some in conflict with one another (for example, authors who want immediate, in-depth reviews versus reviewers who want low loads and ample time to review).&lt;br /&gt;
&lt;br /&gt;
Moving to a new submission and review infrastructure (OpenReview) has proved challenging. The OpenReview team are very collegial and helpful, but the structure of ARR is unusual for OpenReview, which lacked many of the features we needed, including basic automations such as reminder emails and automatic checking of resubmissions. Consequently, while rolling out ARR, our tech team has worked very hard to implement the initiatives outlined above and to provide consistent conflict-of-interest detection, AE assignment and reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
In order for peer review to work, everyone must participate. Because the *ACL community skews young (see the stats above), we must train junior reviewers and continue to involve more senior people in the reviewing process. To the extent that people think they can submit many papers and insist on high-quality reviews, yet provide no reviewing service themselves, any peer review system will fail. It is critical for experienced researchers to continue to participate in peer review. The chart below shows the distribution of the number of submissions by authors who are qualified to review for ARR (at least 5 papers in relevant fields, with the newest no more than 5 years old) and yet have provided 0 reviews to date.&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75025</id>
		<title>2022Q1 Reports: ACL Rolling Review</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75025"/>
		<updated>2022-03-03T19:44:41Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== ARR: Looking Back == &lt;br /&gt;
&lt;br /&gt;
How did ARR come to exist? In the fall of 2019 and spring of 2020, the ACL Exec convened a committee to discuss the future of peer review for *ACL venues. This committee consisted primarily of recent program chairs of big *ACL conferences, and was tasked with considering how to handle “the rapid growth of submissions”. The core problems were stated as following:&lt;br /&gt;
&lt;br /&gt;
* There are many submissions to major NLP conferences each year; the rate of increase at the time was exponential&lt;br /&gt;
* Low acceptance rates at major NLP conferences mean that we suspected that many submissions get resubmitted over and over&lt;br /&gt;
&lt;br /&gt;
In addition, arXiv increasingly interferes with the peer review process, with some authors complaining of bias in favor of arXiv preprints and authors complaining of the slowness of peer review and risks due to low acceptance rates at major NLP conferences.&lt;br /&gt;
&lt;br /&gt;
The committee came up with two sets of proposals: https://www.aclweb.org/portal/content/short-term-reform-proposals-acl-reviewing (June 2020) and https://www.aclweb.org/portal/content/long-term-reform-proposal-acl-reviewing (June 2020). &lt;br /&gt;
&lt;br /&gt;
To the core goal of ARR, cutting review load and increasing review consistency through R&amp;amp;R:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Venue &lt;br /&gt;
! 2020 &lt;br /&gt;
! 2021 &lt;br /&gt;
! 2022 &lt;br /&gt;
|-&lt;br /&gt;
| ACL &lt;br /&gt;
| 3429 &lt;br /&gt;
| 3350 &lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| NAACL &lt;br /&gt;
| - &lt;br /&gt;
| 1797 &lt;br /&gt;
| - &lt;br /&gt;
|-&lt;br /&gt;
| ARR &lt;br /&gt;
| - &lt;br /&gt;
| 3939 (through November) &lt;br /&gt;
| 2093 (incl December 2021) &lt;br /&gt;
|-&lt;br /&gt;
| ARR resubmissions &lt;br /&gt;
| - &lt;br /&gt;
| 221 (through November) &lt;br /&gt;
| 632 (December 2021/January 2022)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
853 submissions that would, without ARR, almost certainly have gone to new reviewers who would have had to write new reviews not knowing about the previous submissions/reviews, instead went to (in the majority of cases the same, but sometimes new) reviewers who had access to the previous reviews. It is important to stress that these numbers portray an incomplete picture of potential ARR gains, as ARR did not yet go through one complete year (that is, one full “conference season”). Once we reach the fall report, we will have a clearer picture regarding the overall reduction of the reviewing effort as we will know how many papers rejected from ACL/NAACL 2022 were re-committed to ACL-IJCNLP, EMNLP 2022 or another *ACL 2022 venue (*ACL workshops) without incurring new reviews.&lt;br /&gt;
&lt;br /&gt;
We do note that another of the short-term proposals, Findings, has been widely taken up, also contributing to reduced reviewing load as Findings papers are no longer eligible for submission to ARR nor *ACL venues. &lt;br /&gt;
&lt;br /&gt;
To the point about arXiv:&lt;br /&gt;
* ARR’s more frequent submission cycles allow authors to get feedback more quickly&lt;br /&gt;
* Almost 900 ARR submissions to-date opted in to be hosted as anonymous preprints&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Stats == &lt;br /&gt;
&lt;br /&gt;
===  Submissions and resubmissions ===&lt;br /&gt;
&lt;br /&gt;
Since the first deadline in May 2021, ARR has run 10 cycles and received a total of more than 6000 submissions including more than 800 resubmissions. Full statistics can be found on our [website](http://stats.aclrollingreview.org/).&lt;br /&gt;
&lt;br /&gt;
===  Authors and reviewers ===&lt;br /&gt;
&lt;br /&gt;
A total of 13801 unique OpenReview profiles have been associated with ARR to date, across authors, reviewers, action editors, tech team, and program chairs.&lt;br /&gt;
&lt;br /&gt;
==== Authors, author experience and authors as reviewers ====&lt;br /&gt;
&lt;br /&gt;
11787 unique author ids are associated with ARR submissions. The distribution of number of submissions across authors is shown in the chart below (5th percentile: 1; median: 1; mean: 1.95; 95th percentile: 5;  max: 46).&lt;br /&gt;
&lt;br /&gt;
[[File:Figure1.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for authors who have provided Semantic Scholar IDs (5th percentile: 1; median: 12; mean: 33; 95th percentile: 179). As expected, authors skew “junior”, with many having 0-5 publications - and of course, authors with no publications do not yet have Semantic Scholar profiles.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure2.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
==== Reviewers, reviewer load and reviewer experience ====&lt;br /&gt;
&lt;br /&gt;
4391 unique reviewer ids are associated with ARR cycles to date. The distribution of number of reviews completed per reviewer who completed at least one review (including reviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 6; mean: 5.47; 95th percentile: 10;  max: 17). This means that the cumulative load per reviewer across 10 months is approximately 6 papers on average.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure3.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for reviewers who have provided Semantic Scholar IDs (5th percentile: 3; median: 20; mean: 40.26; 95th percentile: 170). Reviewers skew less junior than authors. When the ARR EiCs invite someone to review, either the person must have 5 publications with at least one in the past 5 years, or the person must be nominated by a senior action editor, action editor or EiC. However, the statistics here also include emergency reviewers, who may be more junior. &lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
==== Action editors, action editor load  and action editor experience ====&lt;br /&gt;
&lt;br /&gt;
531 unique action editor ids are associated with ARR cycles to date. The distribution of number of metareviews completed per action editor who completed at least one metareview (including metareviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 10; mean: 8.67; 95th percentile: 14;  max: 20).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure5.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for action editors who have provided Semantic Scholar IDs (5th percentile: 5; median: 54; mean: 71.39; 95th percentile: 200). Action editors skew less junior than reviewers. Our initial pool of action editors was drawn from area chairs and senior area chairs from recent *ACL conferences plus workshop organizers in recent years, balancing as possible for diversity by geography and affiliation type (industry/academia).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure6.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Innovations ==&lt;br /&gt;
&lt;br /&gt;
With ARR (a single system and process for reviewing) we have been able to satisfy several of the short-term proposals for improving reviewing and roll out other initiatives, including:&lt;br /&gt;
* Anonymous preprints - hosting anonymous preprints for NLP papers through OpenReview. Almost 900 papers opted in to anonymous preprints through February 2022.&lt;br /&gt;
* Revise and resubmit - authors may revise and resubmit their papers, which we try to match to the previous reviewers (if available) unless the authors request new reviewers. Over 850 ARR submissions through January are resubmissions. See our stats dashboard for month-by-month breakdowns of resubmissions and requests for reviewer reassignment.&lt;br /&gt;
* Reviews as data (with Iryna Gurevych, Ilia Kuznetsov and Nils Dycke) - authors and reviewers may opt-in to provide reviews for research (see https://openreview.net/forum?id=28n-0nBPTch).&lt;br /&gt;
* Reviewer mentoring - providing mentoring to junior reviewers and training to all reviewers (see https://aclrollingreview.org/reviewertutorial).&lt;br /&gt;
* Responsible NLP research (with the NAACL program and reproducibility chairs) - providing a responsible NLP research checklist for authors, and training about ethics and reproducibility in NLP research (see https://aclrollingreview.org/responsibleNLPresearch/). Since December, all ARR submissions have completed the checklist. We understand the NAACL 2022 reproducibility chairs are working on an analysis.&lt;br /&gt;
* Ethics review (with the ACL ethics committee) - providing guidelines for ethics reviewing (https://aclrollingreview.org/ethicsreviewertutorial).&lt;br /&gt;
* Review statistics - providing statistics on the *ACL review process over time (http://stats.aclrollingreview.org/).&lt;br /&gt;
* Reviewer recognition (coming to the February cycle!) - providing recognition letters to reviewers about their service (https://aclrollingreview.org/recognition/).&lt;br /&gt;
&lt;br /&gt;
== ARR: Looking ahead ==&lt;br /&gt;
&lt;br /&gt;
In 2022, ARR will be used by EMNLP, AACL-IJCNLP, INLG, SIGDIAL, and numerous *ACL workshops (https://aclrollingreview.org/dates).&lt;br /&gt;
&lt;br /&gt;
Future ARR initiatives will build on the current initiatives listed above. For example, we will soon roll out:&lt;br /&gt;
* Review quality assessment - we are starting an initiative where authors and AEs rate review quality. This data will feed into reviewer mentoring and reviewer recognition (identifying reviewers whose reviews are outstanding).&lt;br /&gt;
* Submissions-over-time statistics - what happens to NLP papers? For example, how often is a typical paper resubmitted before it is committed to and accepted at a venue?&lt;br /&gt;
* Improved reviewer assignment - Graham Neubig continues to explore ways to improve reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
ARR has invited permanent Senior Action Editors, each responsible for a particular area of NLP. ARR is also inviting permanent ethics reviewers.&lt;br /&gt;
&lt;br /&gt;
The ACL exec will add more EiCs to ARR so that we can effectively distribute the load of managing each cycle as well as building out ethics reviewing, mentoring, and other ARR initiatives. The tech team will recruit a co-CTO.&lt;br /&gt;
&lt;br /&gt;
Finally, by agreement with the ACL exec, ARR will move to a six-week cycle effective April 15, 2022. This will give 9 cycles per year, allow all involved to finish one cycle before the next deadline, eliminate the exceptionally troublesome &amp;quot;December 15th&amp;quot; cycle, and give reviewers more breathing room.&lt;br /&gt;
&lt;br /&gt;
== Concerns and Challenges ==&lt;br /&gt;
&lt;br /&gt;
ARR is a paper-centric reviewing service, independent of any conference; it is not our job to provide acceptance/rejection decisions. Many authors feel unsure about this new process, and understandably so. AEs and reviewers are also new to this rolling process and are concerned about their ongoing loads. Conference chairs are likewise unsure of the new process and opt for a hybrid mode instead of supporting ARR fully, again understandably so. Overall, this means we are constantly addressing concerns from multiple stakeholders, some in conflict with each other (for example, authors who want immediate, in-depth reviews and reviewers who want low loads and plenty of time to review).&lt;br /&gt;
&lt;br /&gt;
Moving to a new submission and review infrastructure (OpenReview) has proved challenging. The OpenReview team are very collegial and helpful, but the structure of ARR is unusual for OpenReview, which lacked many of the features we needed, including basic automations such as reminder emails and automatic checking of resubmissions. Consequently, our tech team has worked very hard, while rolling out ARR, to implement the various initiatives outlined above and to provide consistent conflict-of-interest detection, AE assignment, and reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
In order for peer review to work, everyone must participate. Because the *ACL community skews young (see stats above), we must train junior reviewers and continue to involve more senior people in the reviewing process. To the extent that people think they can submit many papers, insist on high quality reviews, and yet provide no reviewing service themselves, any peer review system will fail. It is critical for experienced researchers to continue to participate in peer review.&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75024</id>
		<title>2022Q1 Reports: ACL Rolling Review</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75024"/>
		<updated>2022-03-03T19:43:53Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== ARR: Looking Back == &lt;br /&gt;
&lt;br /&gt;
How did ARR come to exist? In the fall of 2019 and spring of 2020, the ACL Exec convened a committee to discuss the future of peer review for *ACL venues. This committee consisted primarily of recent program chairs of big *ACL conferences, and was tasked with considering how to handle “the rapid growth of submissions”. The core problems were stated as follows:&lt;br /&gt;
&lt;br /&gt;
* There are many submissions to major NLP conferences each year; the rate of increase at the time was exponential&lt;br /&gt;
* Low acceptance rates at major NLP conferences led us to suspect that many submissions get resubmitted over and over&lt;br /&gt;
&lt;br /&gt;
In addition, arXiv increasingly interferes with the peer review process, with some authors complaining of bias in favor of arXiv preprints and others complaining of the slowness of peer review and the risks posed by low acceptance rates at major NLP conferences.&lt;br /&gt;
&lt;br /&gt;
The committee came up with two sets of proposals: https://www.aclweb.org/portal/content/short-term-reform-proposals-acl-reviewing (June 2020) and https://www.aclweb.org/portal/content/long-term-reform-proposal-acl-reviewing (June 2020). &lt;br /&gt;
&lt;br /&gt;
Regarding the core goal of ARR, cutting review load and increasing review consistency through revise-and-resubmit (R&amp;amp;R), the relevant submission numbers are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Venue &lt;br /&gt;
! 2020 &lt;br /&gt;
! 2021 &lt;br /&gt;
! 2022 &lt;br /&gt;
|-&lt;br /&gt;
| ACL &lt;br /&gt;
| 3429 &lt;br /&gt;
| 3350 &lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| NAACL &lt;br /&gt;
| - &lt;br /&gt;
| 1797 &lt;br /&gt;
| - &lt;br /&gt;
|-&lt;br /&gt;
| ARR &lt;br /&gt;
| - &lt;br /&gt;
| 3939 (through November) &lt;br /&gt;
| 2093 (incl December 2021) &lt;br /&gt;
|-&lt;br /&gt;
| ARR resubmissions &lt;br /&gt;
| - &lt;br /&gt;
| 221 (through November) &lt;br /&gt;
| 632 (December 2021/January 2022)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
853 submissions that would, without ARR, almost certainly have gone to new reviewers, who would have had to write new reviews without knowledge of the previous submissions and reviews, instead went to reviewers (in the majority of cases the same ones, sometimes new ones) who had access to the previous reviews. It is important to stress that these numbers paint an incomplete picture of potential ARR gains, as ARR has not yet gone through one complete year (that is, one full “conference season”). By the fall report we will have a clearer picture of the overall reduction in reviewing effort, as we will know how many papers rejected from ACL/NAACL 2022 were re-committed to AACL-IJCNLP, EMNLP 2022 or another *ACL 2022 venue (e.g., *ACL workshops) without incurring new reviews.&lt;br /&gt;
&lt;br /&gt;
We note that another of the short-term proposals, Findings, has been widely taken up; this also contributes to reduced reviewing load, as Findings papers are no longer eligible for submission to ARR or *ACL venues.&lt;br /&gt;
&lt;br /&gt;
To the point about arXiv:&lt;br /&gt;
* ARR’s more frequent submission cycles allow authors to get feedback more quickly&lt;br /&gt;
* Almost 900 ARR submissions to date have opted in to be hosted as anonymous preprints&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Stats == &lt;br /&gt;
&lt;br /&gt;
===  Submissions and resubmissions ===&lt;br /&gt;
&lt;br /&gt;
Since the first deadline in May 2021, ARR has run 10 cycles and received a total of more than 6000 submissions, including more than 800 resubmissions. Full statistics can be found on our stats website (http://stats.aclrollingreview.org/).&lt;br /&gt;
&lt;br /&gt;
===  Authors and reviewers ===&lt;br /&gt;
&lt;br /&gt;
A total of 13801 unique OpenReview profiles have been associated with ARR to date, across authors, reviewers, action editors, tech team, and program chairs.&lt;br /&gt;
&lt;br /&gt;
==== Authors, author experience and authors as reviewers ====&lt;br /&gt;
&lt;br /&gt;
11787 unique author ids are associated with ARR submissions. The distribution of number of submissions across authors is shown in the chart below (5th percentile: 1; median: 1; mean: 1.95; 95th percentile: 5;  max: 46).&lt;br /&gt;
&lt;br /&gt;
[[File:Figure1.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for authors who have provided Semantic Scholar IDs (5th percentile: 1; median: 12; mean: 33; 95th percentile: 179). As expected, authors skew “junior”, with many having 0-5 publications; of course, authors with no publications do not yet have Semantic Scholar profiles.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure2.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
==== Reviewers, reviewer load and reviewer experience ====&lt;br /&gt;
&lt;br /&gt;
4391 unique reviewer ids are associated with ARR cycles to date. The distribution of number of reviews completed per reviewer who completed at least one review (including reviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 6; mean: 5.47; 95th percentile: 10;  max: 17). This means that the cumulative load per reviewer across 10 months is approximately 6 papers on average.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure3.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for reviewers who have provided Semantic Scholar IDs (5th percentile: 3; median: 20; mean: 40.26; 95th percentile: 170). Reviewers skew less junior than authors. When the ARR EiCs invite someone to review, the person must either have at least 5 publications, with at least one in the past 5 years, or be nominated by a senior action editor, action editor or EiC. However, the statistics here also include emergency reviewers, who may be more junior.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
==== Action editors, action editor load  and action editor experience ====&lt;br /&gt;
&lt;br /&gt;
531 unique action editor ids are associated with ARR cycles to date. The distribution of number of metareviews completed per action editor who completed at least one metareview (including metareviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 10; mean: 8.67; 95th percentile: 14;  max: 20).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure5.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for action editors who have provided Semantic Scholar IDs (5th percentile: 5; median: 54; mean: 71.39; 95th percentile: 200). Action editors skew less junior than reviewers. Our initial pool of action editors was drawn from area chairs and senior area chairs from recent *ACL conferences, plus workshop organizers from recent years, balancing as far as possible for diversity by geography and affiliation type (industry/academia).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure6.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Innovations ==&lt;br /&gt;
&lt;br /&gt;
With ARR (a single system and process for reviewing) we have been able to satisfy several of the short-term proposals for improving reviewing and roll out other initiatives, including:&lt;br /&gt;
* Anonymous preprints - hosting anonymous preprints for NLP papers through OpenReview. Almost 900 papers opted in to anonymous preprints through February 2022.&lt;br /&gt;
* Revise and resubmit - authors may revise and resubmit their papers, which we try to match to the previous reviewers (if available) unless the authors request new reviewers. Over 850 ARR submissions through January are resubmissions. See our stats dashboard for month-by-month breakdowns of resubmissions and requests for reviewer reassignment.&lt;br /&gt;
* Reviews as data (with Iryna Gurevych, Ilia Kuznetsov and Nils Dycke) - authors and reviewers may opt-in to provide reviews for research (see https://openreview.net/forum?id=28n-0nBPTch).&lt;br /&gt;
* Reviewer mentoring - providing mentoring to junior reviewers and training to all reviewers (see https://aclrollingreview.org/reviewertutorial).&lt;br /&gt;
* Responsible NLP research (with the NAACL program and reproducibility chairs) - providing a responsible NLP research checklist for authors, and training about ethics and reproducibility in NLP research (see https://aclrollingreview.org/responsibleNLPresearch/). Since December, all ARR submissions have completed the checklist. We understand the NAACL 2022 reproducibility chairs are working on an analysis.&lt;br /&gt;
* Ethics review (with the ACL ethics committee) - providing guidelines for ethics reviewing (https://aclrollingreview.org/ethicsreviewertutorial).&lt;br /&gt;
* Review statistics - providing statistics on the *ACL review process over time (http://stats.aclrollingreview.org/).&lt;br /&gt;
* Reviewer recognition (coming to the February cycle!) - providing recognition letters to reviewers about their service (https://aclrollingreview.org/recognition/).&lt;br /&gt;
&lt;br /&gt;
== ARR: Looking ahead ==&lt;br /&gt;
&lt;br /&gt;
In 2022, ARR will be used by EMNLP, AACL-IJCNLP, INLG, SIGDIAL, and numerous *ACL workshops (https://aclrollingreview.org/dates).&lt;br /&gt;
&lt;br /&gt;
Future ARR initiatives will build on the current initiatives listed above. For example, we will soon roll out:&lt;br /&gt;
* Review quality assessment - we are starting an initiative where authors and AEs rate review quality. This data will feed into reviewer mentoring and reviewer recognition (identifying reviewers whose reviews are outstanding).&lt;br /&gt;
* Submissions-over-time statistics - what happens to NLP papers? For example, how often is a typical paper resubmitted before it is committed to and accepted at a venue?&lt;br /&gt;
* Improved reviewer assignment - Graham Neubig continues to explore ways to improve reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
ARR has invited permanent Senior Action Editors, each responsible for a particular area of NLP. ARR is also inviting permanent ethics reviewers.&lt;br /&gt;
&lt;br /&gt;
The ACL exec will add more EiCs to ARR so that we can effectively distribute the load of managing each cycle as well as building out ethics reviewing, mentoring, and other ARR initiatives. The tech team will recruit a co-CTO.&lt;br /&gt;
&lt;br /&gt;
Finally, by agreement with the ACL exec, ARR will move to a six-week cycle effective April 15, 2022. This will give 9 cycles per year, allow all involved to finish one cycle before the next deadline, eliminate the exceptionally troublesome &amp;quot;December 15th&amp;quot; cycle, and give reviewers more breathing room.&lt;br /&gt;
&lt;br /&gt;
== Concerns and Challenges ==&lt;br /&gt;
&lt;br /&gt;
ARR is a paper-centric reviewing service, independent of any conference; it is not our job to provide acceptance/rejection decisions. Many authors feel unsure about this new process, and understandably so. AEs and reviewers are also new to this rolling process and are concerned about their ongoing loads. Conference chairs are likewise unsure of the new process and opt for a hybrid mode instead of supporting ARR fully, again understandably so. Overall, this means we are constantly addressing concerns from multiple stakeholders, some in conflict with each other (for example, authors who want immediate, in-depth reviews and reviewers who want low loads and plenty of time to review).&lt;br /&gt;
&lt;br /&gt;
Moving to a new submission and review infrastructure (OpenReview) has proved challenging. The OpenReview team are very collegial and helpful, but the structure of ARR is unusual for OpenReview, which lacked many of the features we needed, including basic automations such as reminder emails and automatic checking of resubmissions. Consequently, our tech team has worked very hard, while rolling out ARR, to implement the various initiatives outlined above and to provide consistent conflict-of-interest detection, AE assignment, and reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
In order for peer review to work, everyone must participate. Because the *ACL community skews young (see stats above), we must train junior reviewers and continue to involve more senior people in the reviewing process. To the extent that people think they can submit many papers, insist on high quality reviews, and yet provide no reviewing service themselves, any peer review system will fail. It is critical for experienced researchers to continue to participate in peer review.&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75023</id>
		<title>2022Q1 Reports: ACL Rolling Review</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75023"/>
		<updated>2022-03-03T19:36:36Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== ARR: Looking Back == &lt;br /&gt;
&lt;br /&gt;
How did ARR come to exist? In the fall of 2019 and spring of 2020, the ACL Exec convened a committee to discuss the future of peer review for *ACL venues. This committee consisted primarily of recent program chairs of big *ACL conferences, and was tasked with considering how to handle “the rapid growth of submissions”. The core problems were stated as follows:&lt;br /&gt;
&lt;br /&gt;
* There are many submissions to major NLP conferences each year; the rate of increase at the time was exponential&lt;br /&gt;
* Low acceptance rates at major NLP conferences led us to suspect that many submissions get resubmitted over and over&lt;br /&gt;
&lt;br /&gt;
In addition, arXiv increasingly interferes with the peer review process, with some authors complaining of bias in favor of arXiv preprints and others complaining of the slowness of peer review and the risks posed by low acceptance rates at major NLP conferences.&lt;br /&gt;
&lt;br /&gt;
The committee came up with two sets of proposals: https://www.aclweb.org/portal/content/short-term-reform-proposals-acl-reviewing (June 2020) and https://www.aclweb.org/portal/content/long-term-reform-proposal-acl-reviewing (June 2020). &lt;br /&gt;
&lt;br /&gt;
Regarding the core goal of ARR, cutting review load and increasing review consistency through revise-and-resubmit (R&amp;amp;R), the relevant submission numbers are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Venue &lt;br /&gt;
! 2020 &lt;br /&gt;
! 2021 &lt;br /&gt;
! 2022 &lt;br /&gt;
|-&lt;br /&gt;
| ACL &lt;br /&gt;
| 3429 &lt;br /&gt;
| 3350 &lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| NAACL &lt;br /&gt;
| - &lt;br /&gt;
| 1797 &lt;br /&gt;
| - &lt;br /&gt;
|-&lt;br /&gt;
| ARR &lt;br /&gt;
| - &lt;br /&gt;
| 3939 (through November) &lt;br /&gt;
| 2093 (incl December 2021) &lt;br /&gt;
|-&lt;br /&gt;
| ARR resubmissions &lt;br /&gt;
| - &lt;br /&gt;
| 221 (through November) &lt;br /&gt;
| 632 (December 2021/January 2022)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
853 submissions that would, without ARR, almost certainly have gone to new reviewers, who would have had to write new reviews without knowledge of the previous submissions and reviews, instead went to reviewers (in the majority of cases the same ones, sometimes new ones) who had access to the previous reviews. It is important to stress that these numbers paint an incomplete picture of potential ARR gains, as ARR has not yet gone through one complete year (that is, one full “conference season”). By the fall report we will have a clearer picture of the overall reduction in reviewing effort, as we will know how many papers rejected from ACL/NAACL 2022 were re-committed to AACL-IJCNLP, EMNLP 2022 or another *ACL 2022 venue (e.g., *ACL workshops) without incurring new reviews.&lt;br /&gt;
&lt;br /&gt;
We note that another of the short-term proposals, Findings, has been widely taken up; this also contributes to reduced reviewing load, as Findings papers are no longer eligible for submission to ARR or *ACL venues.&lt;br /&gt;
&lt;br /&gt;
To the point about arXiv:&lt;br /&gt;
* ARR’s more frequent submission cycles allow authors to get feedback more quickly&lt;br /&gt;
* Almost 900 ARR submissions to date have opted in to be hosted as anonymous preprints&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Stats == &lt;br /&gt;
&lt;br /&gt;
===  Submissions and resubmissions ===&lt;br /&gt;
&lt;br /&gt;
Since the first deadline in May 2021, ARR has run 10 cycles and received a total of more than 6000 submissions, including more than 800 resubmissions. Full statistics can be found on our stats website (http://stats.aclrollingreview.org/).&lt;br /&gt;
&lt;br /&gt;
===  Authors and reviewers ===&lt;br /&gt;
&lt;br /&gt;
A total of 13801 unique OpenReview profiles have been associated with ARR to date, across authors, reviewers, action editors, tech team, and program chairs.&lt;br /&gt;
&lt;br /&gt;
==== Authors, author experience and authors as reviewers ====&lt;br /&gt;
&lt;br /&gt;
11787 unique author ids are associated with ARR submissions. The distribution of number of submissions across authors is shown in the chart below (5th percentile: 1; median: 1; mean: 1.95; 95th percentile: 5;  max: 46).&lt;br /&gt;
&lt;br /&gt;
[[File:Figure1.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for authors who have provided Semantic Scholar IDs (5th percentile: 1; median: 12; mean: 33; 95th percentile: 179). As expected, authors skew “junior”, with many having 0-5 publications; of course, authors with no publications do not yet have Semantic Scholar profiles.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure2.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
==== Reviewers, reviewer load and reviewer experience ====&lt;br /&gt;
&lt;br /&gt;
4391 unique reviewer ids are associated with ARR cycles to date. The distribution of number of reviews completed per reviewer who completed at least one review (including reviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 6; mean: 5.47; 95th percentile: 10;  max: 17). This means that the cumulative load per reviewer across 10 months is approximately 6 papers on average.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure3.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for reviewers who have provided Semantic Scholar IDs (5th percentile: 3; median: 20; mean: 40.26; 95th percentile: 170). Reviewers skew less junior than authors. When the ARR EiCs invite someone to review, the person must either have at least 5 publications, with at least one in the past 5 years, or be nominated by a senior action editor, action editor or EiC. However, the statistics here also include emergency reviewers, who may be more junior.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
==== Action editors, action editor load  and action editor experience ====&lt;br /&gt;
&lt;br /&gt;
531 unique action editor ids are associated with ARR cycles to date. The distribution of number of metareviews completed per action editor who completed at least one metareview (including metareviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 10; mean: 8.67; 95th percentile: 14;  max: 20).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure5.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for action editors who have provided Semantic Scholar IDs (5th percentile: 5; median: 54; mean: 71.39; 95th percentile: 200). Action editors skew less junior than reviewers. Our initial pool of action editors was drawn from area chairs and senior area chairs from recent *ACL conferences, plus workshop organizers from recent years, balancing as far as possible for diversity by geography and affiliation type (industry/academia).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure6.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Innovations ==&lt;br /&gt;
&lt;br /&gt;
With ARR (a single system and process for reviewing) we have been able to satisfy several of the short-term proposals for improving reviewing and roll out other initiatives, including:&lt;br /&gt;
* Anonymous preprints - hosting anonymous preprints for NLP papers through OpenReview. Almost 900 papers opted in to anonymous preprints through February 2022.&lt;br /&gt;
* Revise and resubmit - authors may revise and resubmit their papers, which we try to match to the previous reviewers (if available) unless the authors request new reviewers. Over 850 ARR submissions through January are resubmissions. See our stats dashboard for month-by-month breakdowns of resubmissions and requests for reviewer reassignment.&lt;br /&gt;
* Reviews as data (with Iryna Gurevych, Ilia Kuznetsov and Nils Dycke) - authors and reviewers may opt-in to provide reviews for research (see https://openreview.net/forum?id=28n-0nBPTch).&lt;br /&gt;
* Reviewer mentoring - providing mentoring to junior reviewers and training to all reviewers (see https://aclrollingreview.org/reviewertutorial).&lt;br /&gt;
* Responsible NLP research (with the NAACL program and reproducibility chairs) - providing a responsible NLP research checklist for authors, and training about ethics and reproducibility in NLP research (see https://aclrollingreview.org/responsibleNLPresearch/). Since December, all ARR submissions have completed the checklist.&lt;br /&gt;
* Ethics review (with the ACL ethics committee) - providing guidelines for ethics reviewing (https://aclrollingreview.org/ethicsreviewertutorial).&lt;br /&gt;
* Review statistics - providing statistics on the *ACL review process over time.&lt;br /&gt;
* Reviewer recognition (coming to the February cycle!) - providing recognition letters to reviewers about their service.&lt;br /&gt;
&lt;br /&gt;
== ARR: Looking ahead ==&lt;br /&gt;
&lt;br /&gt;
In 2022, ARR will be used by EMNLP, AACL-IJCNLP, INLG, SIGDIAL, and numerous *ACL workshops.&lt;br /&gt;
&lt;br /&gt;
Future ARR initiatives will build on the current initiatives listed above. For example, we will soon roll out:&lt;br /&gt;
* Review quality assessment - we are starting an initiative where authors and AEs rate review quality. This data will feed into reviewer mentoring and reviewer recognition (identifying reviewers whose reviews are outstanding).&lt;br /&gt;
* Submissions-over-time statistics - what happens to NLP papers? For example, how often is a typical paper resubmitted before it is committed to and accepted at a venue?&lt;br /&gt;
* Improved reviewer assignment - Graham Neubig continues to explore ways to improve reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
ARR has invited permanent Senior Action Editors, each responsible for a particular area of NLP. ARR is also inviting permanent ethics reviewers.&lt;br /&gt;
&lt;br /&gt;
The ACL exec will add more EiCs to ARR so that we can effectively distribute the load of managing each cycle as well as building out ethics reviewing, mentoring, and other ARR initiatives. The tech team will recruit a co-CTO.&lt;br /&gt;
&lt;br /&gt;
Finally, by agreement with the ACL exec, ARR will move to a six-week cycle effective April 15, 2022. This will give 9 cycles per year, allow all involved to finish one cycle before the next deadline, eliminate the exceptionally troublesome &amp;quot;December 15th&amp;quot; cycle, and give reviewers more breathing room.&lt;br /&gt;
&lt;br /&gt;
== Concerns and Challenges ==&lt;br /&gt;
&lt;br /&gt;
ARR is a paper-centric reviewing service, independent of any conference; it is not our job to provide acceptance/rejection decisions. Many authors feel unsure about this new process, and understandably so. AEs and reviewers are also new to this rolling process and are concerned about their ongoing loads. Conference chairs are likewise unsure of the new process and opt for a hybrid mode instead of supporting ARR fully, again understandably so. Overall, this means we are constantly addressing concerns from multiple stakeholders, some in conflict with each other (for example, authors who want immediate, in-depth reviews and reviewers who want low loads and plenty of time to review).&lt;br /&gt;
&lt;br /&gt;
Moving to a new submission and review infrastructure (OpenReview) has proved challenging. The OpenReview team are very collegial and helpful, but the structure of ARR is unusual for OpenReview, which lacked many of the features we needed, including basic automations such as reminder emails and automatic checking of resubmissions. Consequently, our tech team has worked very hard, while rolling out ARR, to implement the various initiatives outlined above and to provide consistent conflict-of-interest detection, AE assignment, and reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
In order for peer review to work, everyone must participate. Because the *ACL community skews young (see stats above), we must train junior reviewers and continue to involve more senior people in the reviewing process. To the extent that people think they can submit many papers, insist on high quality reviews, and yet provide no reviewing service themselves, any peer review system will fail. It is critical for experienced researchers to continue to participate in peer review.&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75022</id>
		<title>2022Q1 Reports: ACL Rolling Review</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75022"/>
		<updated>2022-03-03T19:35:31Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== ARR: Looking Back == &lt;br /&gt;
&lt;br /&gt;
How did ARR come to exist? In the fall of 2019 and spring of 2020, the ACL Exec convened a committee to discuss the future of peer review for *ACL venues. This committee consisted primarily of recent program chairs of large *ACL conferences, and was tasked with considering how to handle “the rapid growth of submissions”. The core problems were stated as follows:&lt;br /&gt;
&lt;br /&gt;
* There are many submissions to major NLP conferences each year; at the time, the rate of increase was exponential&lt;br /&gt;
* Low acceptance rates at major NLP conferences led us to suspect that many submissions are resubmitted over and over&lt;br /&gt;
&lt;br /&gt;
In addition, arXiv increasingly interferes with the peer review process: some authors complain of bias in favor of arXiv preprints, while others complain of the slowness of peer review and of the risks posed by low acceptance rates at major NLP conferences.&lt;br /&gt;
&lt;br /&gt;
The committee produced two sets of proposals, both in June 2020: https://www.aclweb.org/portal/content/short-term-reform-proposals-acl-reviewing and https://www.aclweb.org/portal/content/long-term-reform-proposal-acl-reviewing. &lt;br /&gt;
&lt;br /&gt;
Regarding the core goal of ARR, cutting review load and increasing review consistency through revise-and-resubmit (R&amp;amp;R), the submission counts are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Venue &lt;br /&gt;
! 2020 &lt;br /&gt;
! 2021 &lt;br /&gt;
! 2022 &lt;br /&gt;
|-&lt;br /&gt;
| ACL &lt;br /&gt;
| 3429 &lt;br /&gt;
| 3350 &lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| NAACL &lt;br /&gt;
| - &lt;br /&gt;
| 1797 &lt;br /&gt;
| - &lt;br /&gt;
|-&lt;br /&gt;
| ARR &lt;br /&gt;
| - &lt;br /&gt;
| 3939 (through November) &lt;br /&gt;
| 2093 (incl December 2021) &lt;br /&gt;
|-&lt;br /&gt;
| ARR resubmissions &lt;br /&gt;
| - &lt;br /&gt;
| 221 (through November) &lt;br /&gt;
| 632 (December 2021/January 2022)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
853 submissions that, without ARR, would almost certainly have gone to new reviewers, who would have had to write new reviews without knowledge of the previous submissions or reviews, instead went to reviewers (in most cases the same ones, sometimes new ones) who had access to the previous reviews. It is important to stress that these numbers give an incomplete picture of potential ARR gains, as ARR has not yet run for one complete year (that is, one full “conference season”). Once we reach the fall report, we will have a clearer picture of the overall reduction in reviewing effort, as we will know how many papers rejected from ACL/NAACL 2022 were re-committed to AACL-IJCNLP, EMNLP 2022, or another *ACL 2022 venue (such as *ACL workshops) without incurring new reviews.&lt;br /&gt;
&lt;br /&gt;
We do note that another of the short-term proposals, Findings, has been widely taken up, also contributing to reduced reviewing load, as Findings papers are no longer eligible for submission to ARR or *ACL venues. &lt;br /&gt;
&lt;br /&gt;
To the point about arXiv:&lt;br /&gt;
* ARR’s more frequent submission cycles allow authors to get feedback more quickly&lt;br /&gt;
* Almost 900 ARR submissions to date have opted in to be hosted as anonymous preprints&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Stats == &lt;br /&gt;
&lt;br /&gt;
===  Submissions and resubmissions ===&lt;br /&gt;
&lt;br /&gt;
Since the first deadline in May 2021, ARR has run 10 cycles and received more than 6000 submissions, including more than 800 resubmissions. Full statistics can be found on our [http://stats.aclrollingreview.org/ statistics website].&lt;br /&gt;
&lt;br /&gt;
===  Authors and reviewers ===&lt;br /&gt;
&lt;br /&gt;
A total of 13801 unique OpenReview profiles have been associated with ARR to date, across authors, reviewers, action editors, tech team, and program chairs.&lt;br /&gt;
&lt;br /&gt;
==== Authors, author experience and authors as reviewers ====&lt;br /&gt;
&lt;br /&gt;
11787 unique author ids are associated with ARR submissions. The distribution of the number of submissions per author is shown in the chart below (5th percentile: 1; median: 1; mean: 1.95; 95th percentile: 5; max: 46).&lt;br /&gt;
&lt;br /&gt;
[[File:Figure1.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for authors who have provided Semantic Scholar IDs (5th percentile: 1; median: 12; mean: 33; 95th percentile: 179). As expected, authors skew “junior”, with many having 0-5 publications - and of course, authors with no publications do not yet have Semantic Scholar profiles.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure2.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
==== Reviewers, reviewer load and reviewer experience ====&lt;br /&gt;
&lt;br /&gt;
4391 unique reviewer ids are associated with ARR cycles to date. The distribution of the number of reviews completed per reviewer who completed at least one review (including reviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 6; mean: 5.47; 95th percentile: 10; max: 17). This means that the cumulative load per reviewer across 10 months is approximately 5.5 papers on average.&lt;br /&gt;
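The summary statistics quoted throughout this section (5th percentile, median, mean, 95th percentile, max) can be reproduced from a list of per-person counts with a short script. The following is an illustrative sketch only, using a nearest-rank percentile; it is not part of ARR's actual tooling:

```python
# Illustrative only: compute the summary statistics reported above
# from a list of per-reviewer completed-review counts.
import statistics

def percentile(values, p):
    """Nearest-rank percentile of a non-empty list; p ranges over 0..100."""
    ordered = sorted(values)
    # nearest-rank index, clamped to the valid range of list indices
    idx = min(len(ordered) - 1, max(0, round(p / 100 * len(ordered)) - 1))
    return ordered[idx]

def summarize(counts):
    return {
        "p5": percentile(counts, 5),
        "median": statistics.median(counts),
        "mean": round(statistics.mean(counts), 2),
        "p95": percentile(counts, 95),
        "max": max(counts),
    }
```

Applied to the real per-reviewer counts, such a summary would yield the figures given in parentheses above.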
&lt;br /&gt;
[[File:Arrq122figure3.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for reviewers who have provided Semantic Scholar IDs (5th percentile: 3; median: 20; mean: 40.26; 95th percentile: 170). Reviewers skew less junior than authors: when the ARR EiCs invite someone to review, the person must either have 5 publications, with at least one in the past 5 years, or be nominated by a senior action editor, action editor, or EiC. However, the statistics here also include emergency reviewers, who may be more junior. &lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
==== Action editors, action editor load  and action editor experience ====&lt;br /&gt;
&lt;br /&gt;
531 unique action editor ids are associated with ARR cycles to date. The distribution of the number of metareviews completed per action editor who completed at least one metareview (including metareviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 10; mean: 8.67; 95th percentile: 14; max: 20).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure5.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate action editor “seniority”. The chart below shows the number of previous publications from Semantic Scholar for action editors who have provided Semantic Scholar IDs (5th percentile: 5; median: 54; mean: 71.39; 95th percentile: 200). Action editors skew less junior than reviewers. Our initial pool of action editors was drawn from area chairs and senior area chairs of recent *ACL conferences, plus workshop organizers from recent years, balancing as much as possible for diversity in geography and affiliation type (industry/academia).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure6.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Innovations ==&lt;br /&gt;
&lt;br /&gt;
With ARR (a single system and process for reviewing) we have been able to implement several of the short-term proposals for improving reviewing and to roll out other initiatives, including:&lt;br /&gt;
* Anonymous preprints - hosting anonymous preprints for NLP papers through OpenReview. Almost 900 papers opted in to anonymous preprints through February 2022.&lt;br /&gt;
* Revise and resubmit - authors may revise and resubmit their papers, which we try to match to the previous reviewers (if available) unless the authors request new reviewers. Over 850 ARR submissions through January are resubmissions. See our stats dashboard for month-by-month breakdowns of resubmissions and requests for reviewer reassignment.&lt;br /&gt;
* Reviews as data (with Iryna Gurevych, Ilia Kuznetsov and Nils Dycke) - authors and reviewers may opt-in to provide reviews for research (see https://openreview.net/forum?id=28n-0nBPTch).&lt;br /&gt;
* Reviewer mentoring - providing mentoring to junior reviewers and training to all reviewers (see https://aclrollingreview.org/reviewertutorial).&lt;br /&gt;
* Responsible NLP research (with the NAACL program and reproducibility chairs) - providing a responsible NLP research checklist for authors, and training about ethics and reproducibility in NLP research (see https://aclrollingreview.org/responsibleNLPresearch/). Since December, all ARR submissions have completed the checklist.&lt;br /&gt;
* Ethics review (with the ACL ethics committee) - providing guidelines for ethics reviewing (https://aclrollingreview.org/ethicsreviewertutorial).&lt;br /&gt;
* Review statistics - providing statistics on the *ACL review process over time.&lt;br /&gt;
* Reviewer recognition (coming to the February cycle!) - providing recognition letters to reviewers about their service.&lt;br /&gt;
&lt;br /&gt;
== ARR: Looking ahead ==&lt;br /&gt;
&lt;br /&gt;
In 2022, ARR will be used by EMNLP and AACL-IJCNLP, INLG and SIGDIAL, and numerous *ACL workshops.&lt;br /&gt;
&lt;br /&gt;
Future ARR initiatives will build on the current initiatives listed above. For example, we will soon roll out:&lt;br /&gt;
* Review quality assessment - we are starting an initiative where authors and AEs rate review quality. This data will feed into reviewer mentoring and reviewer recognition (identifying reviewers whose reviews are outstanding).&lt;br /&gt;
* Submissions-over-time statistics - what happens to NLP papers? For example, how often is a typical paper resubmitted before it is committed to and accepted at a venue? &lt;br /&gt;
* Improved reviewer assignment - Graham Neubig continues to explore ways to improve reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
ARR has invited permanent Senior Action Editors, each responsible for a particular area of NLP. ARR is also inviting permanent ethics reviewers.&lt;br /&gt;
&lt;br /&gt;
The ACL exec will add more EiCs to ARR so that we can effectively distribute both the load of managing each cycle and the work of building out ethics reviewing, mentoring, and other ARR initiatives. The tech team will recruit a co-CTO.&lt;br /&gt;
&lt;br /&gt;
Finally, by agreement with the ACL exec, ARR will move to a six-week cycle effective April 15, 2022. This will give 9 cycles per year, allowing everyone involved to finish one cycle before the next deadline, eliminating the exceptionally troublesome &amp;quot;December 15th&amp;quot; cycle, and giving reviewers more breathing room. &lt;br /&gt;
&lt;br /&gt;
== Concerns and Challenges ==&lt;br /&gt;
&lt;br /&gt;
ARR is a paper-centric reviewing service, independent of any conference; it is not our job to make acceptance or rejection decisions. Many authors are understandably unsure of this new process. AEs and reviewers are also new to the rolling process and are concerned about their ongoing loads. Conference chairs, again understandably, are unsure of the new process and opt for a hybrid mode instead of supporting ARR fully. Overall, this means we are constantly addressing concerns from multiple constituencies, some in conflict with each other (for example, authors who want immediate, in-depth reviews and reviewers who want low loads and ample time to review). &lt;br /&gt;
&lt;br /&gt;
Moving to a new submission and review infrastructure (OpenReview) has proved challenging. The OpenReview team are very collegial and helpful, but the structure of ARR is unusual for OpenReview, which lacked many of the features and functionalities we needed to implement ARR, including basic automations such as reminder emails and automatic checking of resubmissions. Consequently, while rolling out ARR, our tech team has worked very hard to implement the initiatives outlined above and to provide consistent conflict-of-interest detection and AE and reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
In order for peer review to work, everyone must participate. Because the *ACL community skews young (see the statistics above), we must train junior reviewers and continue to involve more senior people in the reviewing process. To the extent that people believe they can submit many papers and insist on high-quality reviews while providing no reviewing service themselves, any peer review system will fail. It is critical that experienced researchers continue to participate in peer review.&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports&amp;diff=75017</id>
		<title>2022Q1 Reports</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports&amp;diff=75017"/>
		<updated>2022-03-01T21:20:15Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [[2022Q1 Agenda]] - Agenda for the Q1 teleconference&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[2022Q1 Reports: Office]] - Priscilla Rasmussen&lt;br /&gt;
* [[2022Q1 Reports: Secretary]] - Shiqi Zhao&lt;br /&gt;
* [[2022Q1 Reports: Treasurer]] - David Yarowsky&lt;br /&gt;
* [[2022Q1 Reports: Member at-large - EMEA]] - Anna Korhonen&lt;br /&gt;
* [[2022Q1 Reports: Member at-large - Asia/Pacific]] - Yusuke Miyao&lt;br /&gt;
* [[2022Q1 Reports: Member at-large - Americas]] - Mohit Bansal&lt;br /&gt;
* [[2022Q1 Reports: EACL]] - Shuly Wintner&lt;br /&gt;
* [[2022Q1 Reports: NAACL]] - Luciana Benotti&lt;br /&gt;
* [[2022Q1 Reports: AACL]] - Keh-Yih Su&lt;br /&gt;
* [[2022Q1 Reports: ACL 2022]] - Rada Mihalcea&lt;br /&gt;
* [[2022Q1 Reports: ACL 2023]] - Tim Baldwin&lt;br /&gt;
* [[2022Q1 Reports: ACL 2024]] - Iryna Gurevych&lt;br /&gt;
* [[2022Q1 Reports: ACL 2025]] - Emily M. Bender&lt;br /&gt;
* [[2022Q1 Reports: CL Journal]] - Hwee Tou Ng&lt;br /&gt;
* [[2022Q1 Reports: ACL Rolling Review]] - Amanda Stent, Goran Glavas, Sebastian Riedel, Pascale Fung&lt;br /&gt;
&lt;br /&gt;
* [[2022Q1 Reports: Information Director]] - Nitin Madnani&lt;br /&gt;
* [[2022Q1 Reports: TACL Journal]] - Brian Roark, Ani Nenkova&lt;br /&gt;
* [[2022Q1 Reports: Anthology Director]] - Matt Post&lt;br /&gt;
* [[2022Q1 Reports: Publicity Director]] - Barbara Plank&lt;br /&gt;
* [[2022Q1 Reports: PCC]] - Graeme Hirst, Donia Scott&lt;br /&gt;
* [[2022Q1 Reports: Equity Director]] - Natalie Schluter&lt;br /&gt;
* [[2022Q1 Reports: Sponsorship Director]] - Chris Callison-Burch&lt;br /&gt;
* [[2022Q1 Reports: Ethics Committee Co-chairs]] - Min-Yen Kan, Karën Fort, Yulia Tsvetkov&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[[2022Q1 Minutes (public version)]]&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75016</id>
		<title>2022Q1 Reports: ACL Rolling Review</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75016"/>
		<updated>2022-03-01T21:19:10Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== ARR: Looking Back == &lt;br /&gt;
&lt;br /&gt;
How did ARR come to exist? In the fall of 2019 and spring of 2020, the ACL Exec convened a committee to discuss the future of peer review for *ACL venues. This committee consisted primarily of recent program chairs of large *ACL conferences, and was tasked with considering how to handle “the rapid growth of submissions”. The core problems were stated as follows:&lt;br /&gt;
&lt;br /&gt;
* There are many submissions to major NLP conferences each year; at the time, the rate of increase was exponential&lt;br /&gt;
* Low acceptance rates at major NLP conferences led us to suspect that many submissions are resubmitted over and over&lt;br /&gt;
&lt;br /&gt;
In addition, arXiv increasingly interferes with the peer review process: some authors complain of bias in favor of arXiv preprints, while others complain of the slowness of peer review and of the risks posed by low acceptance rates at major NLP conferences.&lt;br /&gt;
&lt;br /&gt;
The committee produced two sets of proposals, both in June 2020: https://www.aclweb.org/portal/content/short-term-reform-proposals-acl-reviewing and https://www.aclweb.org/portal/content/long-term-reform-proposal-acl-reviewing. &lt;br /&gt;
&lt;br /&gt;
Regarding the core goal of ARR, cutting review load and increasing review consistency through revise-and-resubmit (R&amp;amp;R), the submission counts are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Venue &lt;br /&gt;
! 2020 &lt;br /&gt;
! 2021 &lt;br /&gt;
! 2022 &lt;br /&gt;
|-&lt;br /&gt;
| ACL &lt;br /&gt;
| 3429 &lt;br /&gt;
| 3350 &lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| NAACL &lt;br /&gt;
| - &lt;br /&gt;
| 1797 &lt;br /&gt;
| - &lt;br /&gt;
|-&lt;br /&gt;
| ARR &lt;br /&gt;
| - &lt;br /&gt;
| 3939 (through November) &lt;br /&gt;
| 2093 (incl December 2021) &lt;br /&gt;
|-&lt;br /&gt;
| ARR resubmissions &lt;br /&gt;
| - &lt;br /&gt;
| 221 (through November) &lt;br /&gt;
| 632 (December 2021/January 2022)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
853 submissions that, without ARR, would almost certainly have gone to new reviewers, who would have had to write new reviews without knowledge of the previous submissions or reviews, instead went to reviewers (in most cases the same ones, sometimes new ones) who had access to the previous reviews. It is important to stress that these numbers give an incomplete picture of potential ARR gains, as ARR has not yet run for one complete year (that is, one full “conference season”). Once we reach the fall report, we will have a clearer picture of the overall reduction in reviewing effort, as we will know how many papers rejected from ACL/NAACL 2022 were re-committed to AACL-IJCNLP, EMNLP 2022, or another *ACL 2022 venue (such as *ACL workshops) without incurring new reviews.&lt;br /&gt;
&lt;br /&gt;
We do note that another of the short-term proposals, Findings, has been widely taken up, also contributing to reduced reviewing load, as Findings papers are no longer eligible for submission to ARR or *ACL venues. &lt;br /&gt;
&lt;br /&gt;
To the point about arXiv:&lt;br /&gt;
* ARR’s more frequent submission cycles allow authors to get feedback more quickly&lt;br /&gt;
* Almost 900 ARR submissions to date have opted in to be hosted as anonymous preprints&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Stats == &lt;br /&gt;
&lt;br /&gt;
===  Submissions and resubmissions ===&lt;br /&gt;
&lt;br /&gt;
Since the first deadline in May 2021, ARR has run 10 cycles and received more than 6000 submissions, including more than 800 resubmissions. Full statistics can be found on our [http://stats.aclrollingreview.org/ statistics website].&lt;br /&gt;
&lt;br /&gt;
===  Authors and reviewers ===&lt;br /&gt;
&lt;br /&gt;
A total of 13801 unique OpenReview profiles have been associated with ARR to date, across authors, reviewers, action editors, tech team, and program chairs.&lt;br /&gt;
&lt;br /&gt;
==== Authors, author experience and authors as reviewers ====&lt;br /&gt;
&lt;br /&gt;
11787 unique author ids are associated with ARR submissions. The distribution of the number of submissions per author is shown in the chart below (5th percentile: 1; median: 1; mean: 1.95; 95th percentile: 5; max: 46).&lt;br /&gt;
&lt;br /&gt;
[[File:Figure1.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for authors who have provided Semantic Scholar IDs (5th percentile: 1; median: 12; mean: 33; 95th percentile: 179). As expected, authors skew “junior”, with many having 0-5 publications - and of course, authors with no publications do not yet have Semantic Scholar profiles.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure2.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
==== Reviewers, reviewer load and reviewer experience ====&lt;br /&gt;
&lt;br /&gt;
4391 unique reviewer ids are associated with ARR cycles to date. The distribution of the number of reviews completed per reviewer who completed at least one review (including reviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 6; mean: 5.47; 95th percentile: 10; max: 17). This means that the cumulative load per reviewer across 10 months is approximately 5.5 papers on average.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure3.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for reviewers who have provided Semantic Scholar IDs (5th percentile: 3; median: 20; mean: 40.26; 95th percentile: 170). Reviewers skew less junior than authors: when the ARR EiCs invite someone to review, the person must either have 5 publications, with at least one in the past 5 years, or be nominated by a senior action editor, action editor, or EiC. However, the statistics here also include emergency reviewers, who may be more junior. &lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
==== Action editors, action editor load  and action editor experience ====&lt;br /&gt;
&lt;br /&gt;
531 unique action editor ids are associated with ARR cycles to date. The distribution of the number of metareviews completed per action editor who completed at least one metareview (including metareviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 10; mean: 8.67; 95th percentile: 14; max: 20).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure5.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate action editor “seniority”. The chart below shows the number of previous publications from Semantic Scholar for action editors who have provided Semantic Scholar IDs (5th percentile: 5; median: 54; mean: 71.39; 95th percentile: 200). Action editors skew less junior than reviewers. Our initial pool of action editors was drawn from area chairs and senior area chairs of recent *ACL conferences, plus workshop organizers from recent years, balancing as much as possible for diversity in geography and affiliation type (industry/academia).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure6.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Innovations ==&lt;br /&gt;
&lt;br /&gt;
With ARR (a single system and process for reviewing) we have been able to implement several of the short-term proposals for improving reviewing and to roll out other initiatives, including:&lt;br /&gt;
* Anonymous preprints - hosting anonymous preprints for NLP papers through OpenReview. Almost 900 papers opted in to anonymous preprints through February 2022.&lt;br /&gt;
* Revise and resubmit - authors may revise and resubmit their papers, which we try to match to the previous reviewers (if available) unless the authors request new reviewers. Over 850 ARR submissions through January are resubmissions. See our stats dashboard for month-by-month breakdowns of resubmissions and requests for reviewer reassignment.&lt;br /&gt;
* Reviews as data (with Iryna Gurevych, Ilia Kuznetsov and Nils Dycke) - authors and reviewers may opt-in to provide reviews for research (see https://openreview.net/forum?id=28n-0nBPTch).&lt;br /&gt;
* Reviewer mentoring - providing mentoring to junior reviewers and training to all reviewers (see https://aclrollingreview.org/reviewertutorial).&lt;br /&gt;
* Responsible NLP research (with the NAACL program and reproducibility chairs) - providing a responsible NLP research checklist for authors, and training about ethics and reproducibility in NLP research (see https://aclrollingreview.org/responsibleNLPresearch/). Since December, all ARR submissions have completed the checklist.&lt;br /&gt;
* Ethics review (with the ACL ethics committee) - providing guidelines for ethics reviewing (https://aclrollingreview.org/ethicsreviewertutorial).&lt;br /&gt;
* Review statistics - providing statistics on the *ACL review process over time.&lt;br /&gt;
* Reviewer recognition (coming to the February cycle!) - providing recognition letters to reviewers about their service.&lt;br /&gt;
&lt;br /&gt;
== ARR: Looking ahead ==&lt;br /&gt;
&lt;br /&gt;
In 2022, ARR will be used by EMNLP and AACL-IJCNLP, INLG and SIGDIAL, and numerous *ACL workshops.&lt;br /&gt;
&lt;br /&gt;
Future ARR initiatives will build on the current initiatives listed above. For example, we will soon roll out:&lt;br /&gt;
* Review quality assessment - we are starting an initiative where authors and AEs rate review quality. This data will feed into reviewer mentoring and reviewer recognition (identifying reviewers whose reviews are outstanding).&lt;br /&gt;
* Submissions-over-time statistics - what happens to NLP papers? For example, how often is a typical paper resubmitted before it is committed to and accepted at a venue? &lt;br /&gt;
* Improved reviewer assignment - Graham Neubig continues to explore ways to improve reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
ARR has invited permanent Senior Action Editors, each responsible for a particular area of NLP. ARR is also inviting permanent ethics reviewers.&lt;br /&gt;
&lt;br /&gt;
The ACL exec will add more EiCs to ARR so that we can effectively distribute both the load of managing each cycle and the work of building out ethics reviewing, mentoring, and other ARR initiatives. The tech team will recruit a co-CTO.&lt;br /&gt;
&lt;br /&gt;
== Concerns and Challenges ==&lt;br /&gt;
&lt;br /&gt;
ARR is a paper-centric reviewing service, independent of any conference; it is not our job to make acceptance or rejection decisions. Many authors are understandably unsure of this new process. AEs and reviewers are also new to the rolling process and are concerned about their ongoing loads. Conference chairs, again understandably, are unsure of the new process and opt for a hybrid mode instead of supporting ARR fully. Overall, this means we are constantly addressing concerns from multiple constituencies, some in conflict with each other (for example, authors who want immediate, in-depth reviews and reviewers who want low loads and ample time to review). &lt;br /&gt;
&lt;br /&gt;
Moving to a new submission and review infrastructure (OpenReview) has proved challenging. The OpenReview team are very collegial and helpful, but the structure of ARR is unusual for OpenReview, which lacked many of the features and functionalities we needed to implement ARR, including basic automations such as reminder emails and automatic checking of resubmissions. Consequently, while rolling out ARR, our tech team has worked very hard to implement the initiatives outlined above and to provide consistent conflict-of-interest detection and AE and reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
In order for peer review to work, everyone must participate. Because the *ACL community skews young (see the statistics above), we must train junior reviewers and continue to involve more senior people in the reviewing process. To the extent that people believe they can submit many papers and insist on high-quality reviews while providing no reviewing service themselves, any peer review system will fail. It is critical that experienced researchers continue to participate in peer review.&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75015</id>
		<title>2022Q1 Reports: ACL Rolling Review</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75015"/>
		<updated>2022-03-01T21:01:42Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== ARR: Looking Back == &lt;br /&gt;
&lt;br /&gt;
How did ARR come to exist? In the fall of 2019 and spring of 2020, the ACL Exec convened a committee to discuss the future of peer review for *ACL venues. This committee consisted primarily of recent program chairs of large *ACL conferences, and was tasked with considering how to handle “the rapid growth of submissions”. The core problems were stated as follows:&lt;br /&gt;
&lt;br /&gt;
* There are many submissions to major NLP conferences each year; at the time, the rate of increase was exponential&lt;br /&gt;
* Low acceptance rates at major NLP conferences led us to suspect that many submissions are resubmitted over and over&lt;br /&gt;
&lt;br /&gt;
In addition, arXiv increasingly interferes with the peer review process: some authors complain of bias in favor of arXiv preprints, while others complain of the slowness of peer review and of the risks posed by low acceptance rates at major NLP conferences.&lt;br /&gt;
&lt;br /&gt;
The committee produced two sets of proposals, both in June 2020: https://www.aclweb.org/portal/content/short-term-reform-proposals-acl-reviewing and https://www.aclweb.org/portal/content/long-term-reform-proposal-acl-reviewing. &lt;br /&gt;
&lt;br /&gt;
Regarding the core goal of ARR, cutting review load and increasing review consistency through revise-and-resubmit (R&amp;amp;R), the submission counts are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Venue &lt;br /&gt;
! 2020 &lt;br /&gt;
! 2021 &lt;br /&gt;
! 2022 &lt;br /&gt;
|-&lt;br /&gt;
| ACL &lt;br /&gt;
| 3429 &lt;br /&gt;
| 3350 &lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| NAACL &lt;br /&gt;
| - &lt;br /&gt;
| 1797 &lt;br /&gt;
| - &lt;br /&gt;
|-&lt;br /&gt;
| ARR &lt;br /&gt;
| - &lt;br /&gt;
| 3939 (through November) &lt;br /&gt;
| 2093 (incl December 2021) &lt;br /&gt;
|-&lt;br /&gt;
| ARR resubmissions &lt;br /&gt;
| - &lt;br /&gt;
| 221 (through November) &lt;br /&gt;
| 632 (December 2021/January 2022)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
853 submissions that, without ARR, would almost certainly have gone to new reviewers, who would have had to write new reviews without knowledge of the previous submissions or reviews, instead went to reviewers (in most cases the same ones, sometimes new ones) who had access to the previous reviews. It is important to stress that these numbers give an incomplete picture of potential ARR gains, as ARR has not yet run for one complete year (that is, one full “conference season”). Once we reach the fall report, we will have a clearer picture of the overall reduction in reviewing effort, as we will know how many papers rejected from ACL/NAACL 2022 were re-committed to AACL-IJCNLP, EMNLP 2022, or another *ACL 2022 venue (such as *ACL workshops) without incurring new reviews.&lt;br /&gt;
&lt;br /&gt;
We do note that another of the short-term proposals, Findings, has been widely taken up, also contributing to reduced reviewing load, as Findings papers are no longer eligible for submission to ARR or *ACL venues. &lt;br /&gt;
&lt;br /&gt;
To the point about arXiv:&lt;br /&gt;
* ARR’s more frequent submission cycles allow authors to get feedback more quickly&lt;br /&gt;
* 860 ARR submissions to date have opted in to being hosted as anonymous preprints&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Stats == &lt;br /&gt;
&lt;br /&gt;
===  Submissions and resubmissions ===&lt;br /&gt;
&lt;br /&gt;
Since the first deadline in May 2021, ARR has run 10 cycles and received a total of more than 6000 submissions, including more than 800 resubmissions. Full statistics can be found on our [http://stats.aclrollingreview.org/ statistics website].&lt;br /&gt;
&lt;br /&gt;
===  Authors and reviewers ===&lt;br /&gt;
&lt;br /&gt;
A total of 13801 unique OpenReview profiles have been associated with ARR to date, across authors, reviewers, action editors, tech team, and program chairs.&lt;br /&gt;
&lt;br /&gt;
==== Authors, author experience and authors as reviewers ====&lt;br /&gt;
&lt;br /&gt;
11787 unique author IDs are associated with ARR submissions. The distribution of the number of submissions per author is shown in the chart below (5th percentile: 1; median: 1; mean: 1.95; 95th percentile: 5; max: 46).&lt;br /&gt;
&lt;br /&gt;
[[File:Figure1.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for authors who have provided Semantic Scholar IDs (5th percentile: 1; median: 12; mean: 33; 95th percentile: 179). As expected, authors skew “junior”, with many having 0-5 publications; of course, authors with no publications do not yet have Semantic Scholar profiles.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure2.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
==== Reviewers, reviewer load and reviewer experience ====&lt;br /&gt;
&lt;br /&gt;
4391 unique reviewer IDs are associated with ARR cycles to date. The distribution of the number of reviews completed per reviewer who completed at least one review (including reviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 6; mean: 5.47; 95th percentile: 10; max: 17). This means that the cumulative load per reviewer across 10 months is approximately 6 papers on average.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure3.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for reviewers who have provided Semantic Scholar IDs (5th percentile: 3; median: 20; mean: 40.26; 95th percentile: 170). Reviewers skew less junior than authors. When the ARR EiCs invite someone to review, the person must either have at least 5 publications, with at least one in the past 5 years, or be nominated by a senior action editor, action editor or EiC. However, the statistics here also include emergency reviewers, who may be more junior. &lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure4.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
==== Action editors, action editor load and action editor experience ====&lt;br /&gt;
&lt;br /&gt;
531 unique action editor IDs are associated with ARR cycles to date. The distribution of the number of metareviews completed per action editor who completed at least one metareview (including metareviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 10; mean: 8.67; 95th percentile: 14; max: 20).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure5.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for action editors who have provided Semantic Scholar IDs (5th percentile: 5; median: 54; mean: 71.39; 95th percentile: 200). Action editors skew less junior than reviewers. Our initial pool of action editors was drawn from area chairs and senior area chairs of recent *ACL conferences, plus recent workshop organizers, balancing as far as possible for diversity by geography and affiliation type (industry/academia).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure6.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Innovations ==&lt;br /&gt;
&lt;br /&gt;
With ARR, a single system and process for reviewing, we have been able to implement several of the short-term proposals for improving reviewing and to roll out other initiatives, including:&lt;br /&gt;
* Anonymous preprints - hosting anonymous preprints for NLP papers through OpenReview. Almost 900 papers opted in to anonymous preprints through February 2022.&lt;br /&gt;
* Revise and resubmit - authors may revise and resubmit their papers, which we try to match to the previous reviewers (if available) unless the authors request new reviewers. Over 850 ARR submissions through January are resubmissions. See our stats dashboard for month-by-month breakdowns of resubmissions and requests for reviewer reassignment.&lt;br /&gt;
* Reviews as data (with Iryna Gurevych, Ilia Kuznetsov and Nils Dycke) - authors and reviewers may opt-in to provide reviews for research (see https://openreview.net/forum?id=28n-0nBPTch).&lt;br /&gt;
* Reviewer mentoring - providing mentoring to junior reviewers and training to all reviewers (see https://aclrollingreview.org/reviewertutorial).&lt;br /&gt;
* Responsible NLP research (with the NAACL program and reproducibility chairs) - providing a responsible NLP research checklist for authors, and training about ethics and reproducibility in NLP research (see https://aclrollingreview.org/responsibleNLPresearch/). Since December, all ARR submissions have completed the checklist.&lt;br /&gt;
* Ethics review (with the ACL ethics committee) - providing guidelines for ethics reviewing (https://aclrollingreview.org/ethicsreviewertutorial).&lt;br /&gt;
* Review statistics - providing statistics on the *ACL review process over time.&lt;br /&gt;
* Reviewer recognition (coming to the February cycle!) - providing recognition letters to reviewers about their service.&lt;br /&gt;
&lt;br /&gt;
== ARR: Looking ahead ==&lt;br /&gt;
&lt;br /&gt;
In 2022, ARR will be used by EMNLP and AACL-IJCNLP, INLG and SIGDIAL, and numerous *ACL workshops.&lt;br /&gt;
&lt;br /&gt;
Future ARR initiatives will build on the current initiatives listed above. For example, we will soon roll out:&lt;br /&gt;
* Review quality assessment - we are starting an initiative where authors and AEs rate review quality. This data will feed into reviewer mentoring and reviewer recognition (identifying reviewers whose reviews are outstanding).&lt;br /&gt;
* Submissions-over-time statistics - what happens to NLP papers? For example, how often is a typical paper resubmitted before it is committed to and accepted at a venue? &lt;br /&gt;
* Improved reviewer assignment - Graham Neubig continues to explore ways to improve reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
ARR has invited permanent Senior Action Editors, each responsible for a particular area of NLP. ARR is also inviting permanent ethics reviewers.&lt;br /&gt;
&lt;br /&gt;
The ACL Exec will add more EiCs to ARR so that we can effectively distribute the load of managing each cycle as well as building out ethics reviewing, mentoring, and other ARR initiatives. The tech team will recruit a co-CTO.&lt;br /&gt;
&lt;br /&gt;
== Concerns and Challenges ==&lt;br /&gt;
&lt;br /&gt;
ARR is a paper-centric reviewing service, independent of any conference; it is not our job to make acceptance/rejection decisions. Many authors feel unsure of this new process, and understandably so. AEs and reviewers are also new to the rolling process and are concerned about their ongoing loads. Conference chairs are likewise unsure of the new process and opt for a hybrid mode instead of supporting ARR fully, again understandably so. Overall, this means we are constantly addressing concerns from multiple stakeholders, some in conflict with each other (for example, authors who want immediate, in-depth reviews versus reviewers who want low loads and ample time to review). &lt;br /&gt;
&lt;br /&gt;
Moving to a new submission and review infrastructure (OpenReview) has proved challenging. The OpenReview team is very collegial and helpful, but the structure of ARR is unusual for OpenReview, which lacked many of the features and functionalities we needed to implement ARR, including basic automations such as reminder emails and automatic checking of resubmissions. Consequently, our tech team has worked very hard, while rolling out ARR, to implement the various initiatives outlined above, to handle AE and reviewer assignment, and to provide consistent conflict-of-interest detection.&lt;br /&gt;
&lt;br /&gt;
In order for peer review to work, everyone must participate. Because the *ACL community skews young (see the statistics above), we must train junior reviewers and continue to involve more senior people in the reviewing process. If people believe they can submit many papers and insist on high-quality reviews while providing no reviewing service themselves, any peer review system will fail. It is critical for experienced researchers to continue to participate in peer review.&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75013</id>
		<title>2022Q1 Reports: ACL Rolling Review</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75013"/>
		<updated>2022-03-01T20:58:36Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== ARR: Looking Back == &lt;br /&gt;
&lt;br /&gt;
How did ARR come to exist? In the fall of 2019 and spring of 2020, the ACL Exec convened a committee to discuss the future of peer review for *ACL venues. This committee consisted primarily of recent program chairs of big *ACL conferences, and was tasked with considering how to handle “the rapid growth of submissions”. The core problems were stated as follows:&lt;br /&gt;
&lt;br /&gt;
* There are many submissions to major NLP conferences each year; the rate of increase at the time was exponential&lt;br /&gt;
* Low acceptance rates at major NLP conferences led us to suspect that many submissions get resubmitted over and over&lt;br /&gt;
&lt;br /&gt;
In addition, arXiv increasingly interferes with the peer review process: some authors complain of bias in favor of arXiv preprints, while others complain of the slowness of peer review and the risks posed by low acceptance rates at major NLP conferences.&lt;br /&gt;
&lt;br /&gt;
The committee came up with two sets of proposals: https://www.aclweb.org/portal/content/short-term-reform-proposals-acl-reviewing (June 2020) and https://www.aclweb.org/portal/content/long-term-reform-proposal-acl-reviewing (June 2020). &lt;br /&gt;
&lt;br /&gt;
Regarding the core goal of ARR, cutting review load and increasing review consistency through revise-and-resubmit (R&amp;amp;R), the submission counts so far are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Venue &lt;br /&gt;
! 2020 &lt;br /&gt;
! 2021 &lt;br /&gt;
! 2022 &lt;br /&gt;
|-&lt;br /&gt;
| ACL &lt;br /&gt;
| 3429 &lt;br /&gt;
| 3350 &lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| NAACL &lt;br /&gt;
| - &lt;br /&gt;
| 1797 &lt;br /&gt;
| - &lt;br /&gt;
|-&lt;br /&gt;
| ARR &lt;br /&gt;
| - &lt;br /&gt;
| 3939 (through November) &lt;br /&gt;
| 2093 (including December 2021) &lt;br /&gt;
|-&lt;br /&gt;
| ARR resubmissions &lt;br /&gt;
| - &lt;br /&gt;
| 221 (through November) &lt;br /&gt;
| 632 (December 2021/January 2022)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
853 submissions that, without ARR, would almost certainly have gone to new reviewers, who would have had to write new reviews without knowledge of the previous submissions and reviews, instead went to reviewers (in the majority of cases the same ones, though sometimes new ones) who had access to the previous reviews. It is important to stress that these numbers paint an incomplete picture of potential ARR gains, as ARR has not yet gone through one complete year (that is, one full “conference season”). By the fall report we will have a clearer picture of the overall reduction in reviewing effort, as we will know how many papers rejected from ACL/NAACL 2022 were re-committed to AACL-IJCNLP, EMNLP 2022 or another *ACL 2022 venue (e.g., *ACL workshops) without incurring new reviews.&lt;br /&gt;
&lt;br /&gt;
We note that another of the short-term proposals, Findings, has been widely taken up. This also contributes to a reduced reviewing load, as papers accepted to Findings are no longer eligible for submission to ARR or other *ACL venues. &lt;br /&gt;
&lt;br /&gt;
To the point about arXiv:&lt;br /&gt;
* ARR’s more frequent submission cycles allow authors to get feedback more quickly&lt;br /&gt;
* 860 ARR submissions to date have opted in to being hosted as anonymous preprints&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Stats == &lt;br /&gt;
&lt;br /&gt;
===  Submissions and resubmissions ===&lt;br /&gt;
&lt;br /&gt;
Since the first deadline in May 2021, ARR has run 10 cycles and received a total of more than 6000 submissions, including more than 800 resubmissions. Full statistics can be found on our [http://stats.aclrollingreview.org/ statistics website].&lt;br /&gt;
&lt;br /&gt;
===  Authors and reviewers ===&lt;br /&gt;
&lt;br /&gt;
A total of 13801 unique OpenReview profiles have been associated with ARR to date, across authors, reviewers, action editors, tech team, and program chairs.&lt;br /&gt;
&lt;br /&gt;
==== Authors, author experience and authors as reviewers ====&lt;br /&gt;
&lt;br /&gt;
11787 unique author IDs are associated with ARR submissions. The distribution of the number of submissions per author is shown in the chart below (5th percentile: 1; median: 1; mean: 1.95; 95th percentile: 5; max: 46).&lt;br /&gt;
&lt;br /&gt;
[[File:Figure1.PNG]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for authors who have provided Semantic Scholar IDs (5th percentile: 1; median: 12; mean: 33; 95th percentile: 179). As expected, authors skew “junior”, with many having 0-5 publications; of course, authors with no publications do not yet have Semantic Scholar profiles.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure2.PNG]]&lt;br /&gt;
&lt;br /&gt;
==== Reviewers, reviewer load and reviewer experience ====&lt;br /&gt;
&lt;br /&gt;
4391 unique reviewer IDs are associated with ARR cycles to date. The distribution of the number of reviews completed per reviewer who completed at least one review (including reviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 6; mean: 5.47; 95th percentile: 10; max: 17). This means that the cumulative load per reviewer across 10 months is approximately 6 papers on average.&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure3.PNG]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for reviewers who have provided Semantic Scholar IDs (5th percentile: 3; median: 20; mean: 40.26; 95th percentile: 170). Reviewers skew less junior than authors. When the ARR EiCs invite someone to review, the person must either have at least 5 publications, with at least one in the past 5 years, or be nominated by a senior action editor, action editor or EiC. However, the statistics here also include emergency reviewers, who may be more junior. &lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure4.PNG]]&lt;br /&gt;
&lt;br /&gt;
==== Action editors, action editor load and action editor experience ====&lt;br /&gt;
&lt;br /&gt;
531 unique action editor IDs are associated with ARR cycles to date. The distribution of the number of metareviews completed per action editor who completed at least one metareview (including metareviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 10; mean: 8.67; 95th percentile: 14; max: 20).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure5.PNG]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for action editors who have provided Semantic Scholar IDs (5th percentile: 5; median: 54; mean: 71.39; 95th percentile: 200). Action editors skew less junior than reviewers. Our initial pool of action editors was drawn from area chairs and senior area chairs of recent *ACL conferences, plus recent workshop organizers, balancing as far as possible for diversity by geography and affiliation type (industry/academia).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure6.PNG]]&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Innovations ==&lt;br /&gt;
&lt;br /&gt;
With ARR, a single system and process for reviewing, we have been able to implement several of the short-term proposals for improving reviewing and to roll out other initiatives, including:&lt;br /&gt;
* Anonymous preprints - hosting anonymous preprints for NLP papers through OpenReview. 875 papers opted in to anonymous preprints through January 2022.&lt;br /&gt;
* Revise and resubmit - authors may revise and resubmit their papers, which we try to match to the previous reviewers (if available) unless the authors request new reviewers. 853 ARR submissions through January are resubmissions. See our stats dashboard for month-by-month breakdowns of resubmissions and requests for reviewer reassignment.&lt;br /&gt;
* Reviews as data (with Iryna Gurevych, Ilia Kuznetsov and Nils Dycke) - authors and reviewers may opt-in to provide reviews for research (see https://openreview.net/forum?id=28n-0nBPTch).&lt;br /&gt;
* Reviewer mentoring - providing mentoring to junior reviewers and training to all reviewers (see https://aclrollingreview.org/reviewertutorial).&lt;br /&gt;
* Responsible NLP research (with the NAACL program and reproducibility chairs) - providing a responsible NLP research checklist for authors, and training about ethics and reproducibility in NLP research (see https://aclrollingreview.org/responsibleNLPresearch/). Since December, all ARR submissions have completed the checklist.&lt;br /&gt;
* Ethics review (with the ACL ethics committee) - providing guidelines for ethics reviewing (https://aclrollingreview.org/ethicsreviewertutorial).&lt;br /&gt;
* Review statistics - providing statistics on the *ACL review process over time.&lt;br /&gt;
* Reviewer recognition (coming to the February cycle!) - providing recognition letters to reviewers about their service.&lt;br /&gt;
&lt;br /&gt;
== ARR: Looking ahead ==&lt;br /&gt;
&lt;br /&gt;
In 2022, ARR will be used by EMNLP and AACL-IJCNLP, INLG and SIGDIAL, and numerous *ACL workshops.&lt;br /&gt;
&lt;br /&gt;
Future ARR initiatives will build on the current initiatives listed above. For example, we will soon roll out:&lt;br /&gt;
* Review quality assessment - we are starting an initiative where authors and AEs rate review quality. This data will feed into reviewer mentoring and reviewer recognition (identifying reviewers whose reviews are outstanding).&lt;br /&gt;
* Submissions-over-time statistics - what happens to NLP papers? For example, how often is a typical paper resubmitted before it is committed to and accepted at a venue?&lt;br /&gt;
* Improved reviewer assignment - Graham Neubig continues to explore ways to improve reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
ARR has invited permanent Senior Action Editors, each responsible for a particular area of NLP. ARR is also inviting permanent ethics reviewers.&lt;br /&gt;
&lt;br /&gt;
The ACL exec will add more EiCs to ARR so that we can effectively distribute the load of managing each cycle as well as building out ethics reviewing, mentoring, and other ARR initiatives. The tech team will recruit a co-CTO.&lt;br /&gt;
&lt;br /&gt;
== Concerns and Challenges ==&lt;br /&gt;
&lt;br /&gt;
ARR is a paper-centric reviewing service, independent of any conference. It is not our job to provide acceptance/rejection decisions. Many authors feel unsure of this new process, and understandably so. AEs and reviewers are also new to this rolling process and are concerned about their ongoing loads. Conference chairs are unsure of the new process and opt for hybrid mode instead of supporting ARR fully, again understandably so. Overall, this means we are constantly addressing concerns from multiple stakeholders, some in conflict with each other (for example, authors who want immediate, in-depth reviews versus reviewers who want low loads and plenty of time to review).&lt;br /&gt;
&lt;br /&gt;
Moving to a new submission and review infrastructure (OpenReview) has proved challenging. The OpenReview team are very collegial and helpful, but the structure of ARR is unusual for OpenReview, which lacked many of the features we needed to implement ARR, including basic automations such as reminder emails and automatic checking of resubmissions. Consequently, our tech team has worked very hard, while ARR was rolling out, to implement the initiatives outlined above and to provide consistent conflict-of-interest detection, AE assignment, and reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
In order for peer review to work, everyone must participate. Because the *ACL community skews young (see stats above), we must train junior reviewers and continue to involve more senior people in the reviewing process. To the extent that people think they can submit many papers, insist on high quality reviews, and yet provide no reviewing service themselves, any peer review system will fail. It is critical for experienced researchers to continue to participate in peer review.&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75012</id>
		<title>2022Q1 Reports: ACL Rolling Review</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75012"/>
		<updated>2022-03-01T20:57:05Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== ARR: Looking Back == &lt;br /&gt;
&lt;br /&gt;
How did ARR come to exist? In the fall of 2019 and spring of 2020, the ACL Exec convened a committee to discuss the future of peer review for *ACL venues. This committee consisted primarily of recent program chairs of big *ACL conferences, and was tasked with considering how to handle “the rapid growth of submissions”. The core problems were stated as follows:&lt;br /&gt;
&lt;br /&gt;
* There are many submissions to major NLP conferences each year; the rate of increase at the time was exponential&lt;br /&gt;
* Low acceptance rates at major NLP conferences led us to suspect that many submissions get resubmitted over and over&lt;br /&gt;
&lt;br /&gt;
In addition, arXiv increasingly interferes with the peer review process, with some authors complaining of bias in favor of arXiv preprints and others complaining of the slowness of peer review and of the risks due to low acceptance rates at major NLP conferences.&lt;br /&gt;
&lt;br /&gt;
The committee came up with two sets of proposals: https://www.aclweb.org/portal/content/short-term-reform-proposals-acl-reviewing (June 2020) and https://www.aclweb.org/portal/content/long-term-reform-proposal-acl-reviewing (June 2020). &lt;br /&gt;
&lt;br /&gt;
Regarding the core goal of ARR, cutting review load and increasing review consistency through R&amp;amp;R, the submission counts are as follows:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Venue &lt;br /&gt;
! 2020 &lt;br /&gt;
! 2021 &lt;br /&gt;
! 2022 &lt;br /&gt;
|-&lt;br /&gt;
| ACL &lt;br /&gt;
| 3429 &lt;br /&gt;
| 3350 &lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| NAACL &lt;br /&gt;
| - &lt;br /&gt;
| 1797 &lt;br /&gt;
| - &lt;br /&gt;
|-&lt;br /&gt;
| ARR &lt;br /&gt;
| - &lt;br /&gt;
| 3939 (through November) &lt;br /&gt;
| 2093 (incl December 2021) &lt;br /&gt;
|-&lt;br /&gt;
| ARR resubmissions &lt;br /&gt;
| - &lt;br /&gt;
| 221 (through November) &lt;br /&gt;
| 632 (December 2021/January 2022)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Without ARR, 853 submissions would almost certainly have gone to new reviewers who, knowing nothing of the previous submissions and reviews, would have had to write new reviews from scratch. Instead, these submissions went to reviewers (in the majority of cases the same ones, but sometimes new ones) who had access to the previous reviews. It is important to stress that these numbers give an incomplete picture of potential ARR gains, as ARR has not yet gone through one complete year (that is, one full “conference season”). By the fall report we will have a clearer picture of the overall reduction in reviewing effort, as we will know how many papers rejected from ACL/NAACL 2022 were re-committed to ACL-IJCNLP, EMNLP 2022 or another *ACL 2022 venue (*ACL workshops) without incurring new reviews.&lt;br /&gt;
&lt;br /&gt;
We note that another of the short-term proposals, Findings, has been widely taken up, also contributing to reduced reviewing load, as Findings papers are no longer eligible for submission to ARR or to *ACL venues.&lt;br /&gt;
&lt;br /&gt;
To the point about arXiv:&lt;br /&gt;
* ARR’s more frequent submission cycles allow authors to get feedback more quickly&lt;br /&gt;
* 860 ARR submissions to date have opted in to be hosted as anonymous preprints&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Stats == &lt;br /&gt;
&lt;br /&gt;
===  Submissions and resubmissions ===&lt;br /&gt;
&lt;br /&gt;
Since the first deadline in May 2021, ARR has run 10 cycles and received a total of more than 6000 submissions, including more than 800 resubmissions. Full statistics can be found on our [http://stats.aclrollingreview.org/ stats website].&lt;br /&gt;
&lt;br /&gt;
===  Authors and reviewers ===&lt;br /&gt;
&lt;br /&gt;
A total of 13801 unique OpenReview profiles have been associated with ARR to date, across authors, reviewers, action editors, tech team, and program chairs.&lt;br /&gt;
&lt;br /&gt;
==== Authors, author experience and authors as reviewers ====&lt;br /&gt;
&lt;br /&gt;
11787 unique author ids are associated with ARR submissions. The distribution of number of submissions across authors is shown in the chart below (5th percentile: 1; median: 1; mean: 1.95; 95th percentile: 5; max: 46).&lt;br /&gt;
&lt;br /&gt;
[[File:Arrq122figure1.PNG]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for authors who have provided Semantic Scholar IDs (5th percentile: 1; median: 12; mean: 33; 95th percentile: 179). As expected, authors skew “junior”, with many having 0-5 publications - and of course, authors with no publications do not yet have Semantic Scholar profiles.&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
==== Reviewers, reviewer load and reviewer experience ====&lt;br /&gt;
&lt;br /&gt;
4391 unique reviewer ids are associated with ARR cycles to date. The distribution of number of reviews completed per reviewer who completed at least one review (including reviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 6; mean: 5.47; 95th percentile: 10; max: 17). This means that the cumulative load per reviewer across 10 months is approximately 5.5 papers on average (median: 6).&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for reviewers who have provided Semantic Scholar IDs (5th percentile: 3; median: 20; mean: 40.26; 95th percentile: 170). Reviewers skew less junior than authors. When the ARR EiCs invite someone to review, the person must either have 5 publications, with at least one in the past 5 years, or be nominated by a senior action editor, action editor or EiC. However, the statistics here also include emergency reviewers, who may be more junior.&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
==== Action editors, action editor load  and action editor experience ====&lt;br /&gt;
&lt;br /&gt;
531 unique action editor ids are associated with ARR cycles to date. The distribution of number of metareviews completed per action editor who completed at least one metareview (including metareviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 10; mean: 8.67; 95th percentile: 14; max: 20).&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for action editors who have provided Semantic Scholar IDs (5th percentile: 5; median: 54; mean: 71.39; 95th percentile: 200). Action editors skew less junior than reviewers. Our initial pool of action editors was drawn from area chairs and senior area chairs of recent *ACL conferences, plus workshop organizers from recent years, balancing as much as possible for diversity in geography and affiliation type (industry/academia).&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Innovations ==&lt;br /&gt;
&lt;br /&gt;
With ARR (a single system and process for reviewing) we have been able to satisfy several of the short-term proposals for improving reviewing and roll out other initiatives, including:&lt;br /&gt;
* Anonymous preprints - hosting anonymous preprints for NLP papers through OpenReview. 875 papers opted in to anonymous preprints through January 2022.&lt;br /&gt;
* Revise and resubmit - authors may revise and resubmit their papers, which we try to match to the previous reviewers (if available) unless the authors request new reviewers. 853 ARR submissions through January are resubmissions. See our stats dashboard for month-by-month breakdowns of resubmissions and requests for reviewer reassignment.&lt;br /&gt;
* Reviews as data (with Iryna Gurevych, Ilia Kuznetsov and Nils Dycke) - authors and reviewers may opt-in to provide reviews for research (see https://openreview.net/forum?id=28n-0nBPTch).&lt;br /&gt;
* Reviewer mentoring - providing mentoring to junior reviewers and training to all reviewers (see https://aclrollingreview.org/reviewertutorial).&lt;br /&gt;
* Responsible NLP research (with the NAACL program and reproducibility chairs) - providing a responsible NLP research checklist for authors, and training about ethics and reproducibility in NLP research (see https://aclrollingreview.org/responsibleNLPresearch/). Since December, all ARR submissions have completed the checklist.&lt;br /&gt;
* Ethics review (with the ACL ethics committee) - providing guidelines for ethics reviewing (https://aclrollingreview.org/ethicsreviewertutorial).&lt;br /&gt;
* Review statistics - providing statistics on the *ACL review process over time.&lt;br /&gt;
* Reviewer recognition (coming to the February cycle!) - providing recognition letters to reviewers about their service.&lt;br /&gt;
&lt;br /&gt;
== ARR: Looking ahead ==&lt;br /&gt;
&lt;br /&gt;
In 2022, ARR will be used by EMNLP and AACL-IJCNLP, INLG and SIGDIAL, and numerous *ACL workshops.&lt;br /&gt;
&lt;br /&gt;
Future ARR initiatives will build on the current initiatives listed above. For example, we will soon roll out:&lt;br /&gt;
* Review quality assessment - we are starting an initiative where authors and AEs rate review quality. This data will feed into reviewer mentoring and reviewer recognition (identifying reviewers whose reviews are outstanding).&lt;br /&gt;
* Submissions-over-time statistics - what happens to NLP papers? For example, how often is a typical paper resubmitted before it is committed to and accepted at a venue?&lt;br /&gt;
* Improved reviewer assignment - Graham Neubig continues to explore ways to improve reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
ARR has invited permanent Senior Action Editors, each responsible for a particular area of NLP. ARR is also inviting permanent ethics reviewers.&lt;br /&gt;
&lt;br /&gt;
The ACL exec will add more EiCs to ARR so that we can effectively distribute the load of managing each cycle as well as building out ethics reviewing, mentoring, and other ARR initiatives. The tech team will recruit a co-CTO.&lt;br /&gt;
&lt;br /&gt;
== Concerns and Challenges ==&lt;br /&gt;
&lt;br /&gt;
ARR is a paper-centric reviewing service, independent of any conference. It is not our job to provide acceptance/rejection decisions. Many authors feel unsure of this new process, and understandably so. AEs and reviewers are also new to this rolling process and are concerned about their ongoing loads. Conference chairs are unsure of the new process and opt for hybrid mode instead of supporting ARR fully, again understandably so. Overall, this means we are constantly addressing concerns from multiple stakeholders, some in conflict with each other (for example, authors who want immediate, in-depth reviews versus reviewers who want low loads and plenty of time to review).&lt;br /&gt;
&lt;br /&gt;
Moving to a new submission and review infrastructure (OpenReview) has proved challenging. The OpenReview team are very collegial and helpful, but the structure of ARR is unusual for OpenReview, which lacked many of the features we needed to implement ARR, including basic automations such as reminder emails and automatic checking of resubmissions. Consequently, our tech team has worked very hard, while ARR was rolling out, to implement the initiatives outlined above and to provide consistent conflict-of-interest detection, AE assignment, and reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
In order for peer review to work, everyone must participate. Because the *ACL community skews young (see stats above), we must train junior reviewers and continue to involve more senior people in the reviewing process. To the extent that people think they can submit many papers, insist on high quality reviews, and yet provide no reviewing service themselves, any peer review system will fail. It is critical for experienced researchers to continue to participate in peer review.&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=File:Arrq122figure2.PNG&amp;diff=75011</id>
		<title>File:Arrq122figure2.PNG</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=File:Arrq122figure2.PNG&amp;diff=75011"/>
		<updated>2022-03-01T20:56:28Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=File:Arrq122figure3.PNG&amp;diff=75010</id>
		<title>File:Arrq122figure3.PNG</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=File:Arrq122figure3.PNG&amp;diff=75010"/>
		<updated>2022-03-01T20:56:16Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=File:Arrq122figure4.PNG&amp;diff=75009</id>
		<title>File:Arrq122figure4.PNG</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=File:Arrq122figure4.PNG&amp;diff=75009"/>
		<updated>2022-03-01T20:56:05Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=File:Arrq122figure5.PNG&amp;diff=75008</id>
		<title>File:Arrq122figure5.PNG</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=File:Arrq122figure5.PNG&amp;diff=75008"/>
		<updated>2022-03-01T20:55:54Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=File:Arrq122figure6.PNG&amp;diff=75007</id>
		<title>File:Arrq122figure6.PNG</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=File:Arrq122figure6.PNG&amp;diff=75007"/>
		<updated>2022-03-01T20:55:41Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75006</id>
		<title>2022Q1 Reports: ACL Rolling Review</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75006"/>
		<updated>2022-03-01T20:53:20Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== ARR: Looking Back == &lt;br /&gt;
&lt;br /&gt;
How did ARR come to exist? In the fall of 2019 and spring of 2020, the ACL Exec convened a committee to discuss the future of peer review for *ACL venues. This committee consisted primarily of recent program chairs of big *ACL conferences, and was tasked with considering how to handle “the rapid growth of submissions”. The core problems were stated as follows:&lt;br /&gt;
&lt;br /&gt;
* There are many submissions to major NLP conferences each year; the rate of increase at the time was exponential&lt;br /&gt;
* Low acceptance rates at major NLP conferences led us to suspect that many submissions get resubmitted over and over&lt;br /&gt;
&lt;br /&gt;
In addition, arXiv increasingly interferes with the peer review process, with some authors complaining of bias in favor of arXiv preprints and others complaining of the slowness of peer review and of the risks due to low acceptance rates at major NLP conferences.&lt;br /&gt;
&lt;br /&gt;
The committee came up with two sets of proposals: https://www.aclweb.org/portal/content/short-term-reform-proposals-acl-reviewing (June 2020) and https://www.aclweb.org/portal/content/long-term-reform-proposal-acl-reviewing (June 2020). &lt;br /&gt;
&lt;br /&gt;
Regarding the core goal of ARR, cutting review load and increasing review consistency through R&amp;amp;R, the submission counts are as follows:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Venue &lt;br /&gt;
! 2020 &lt;br /&gt;
! 2021 &lt;br /&gt;
! 2022 &lt;br /&gt;
|-&lt;br /&gt;
| ACL &lt;br /&gt;
| 3429 &lt;br /&gt;
| 3350 &lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| NAACL &lt;br /&gt;
| - &lt;br /&gt;
| 1797 &lt;br /&gt;
| - &lt;br /&gt;
|-&lt;br /&gt;
| ARR &lt;br /&gt;
| - &lt;br /&gt;
| 3939 (through November) &lt;br /&gt;
| 2093 (incl December 2021) &lt;br /&gt;
|-&lt;br /&gt;
| ARR resubmissions &lt;br /&gt;
| - &lt;br /&gt;
| 221 (through November) &lt;br /&gt;
| 632 (December 2021/January 2022)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Without ARR, 853 submissions would almost certainly have gone to new reviewers who, knowing nothing of the previous submissions and reviews, would have had to write new reviews from scratch. Instead, these submissions went to reviewers (in the majority of cases the same ones, but sometimes new ones) who had access to the previous reviews. It is important to stress that these numbers give an incomplete picture of potential ARR gains, as ARR has not yet gone through one complete year (that is, one full “conference season”). By the fall report we will have a clearer picture of the overall reduction in reviewing effort, as we will know how many papers rejected from ACL/NAACL 2022 were re-committed to ACL-IJCNLP, EMNLP 2022 or another *ACL 2022 venue (*ACL workshops) without incurring new reviews.&lt;br /&gt;
&lt;br /&gt;
We note that another of the short-term proposals, Findings, has been widely taken up, also contributing to reduced reviewing load, as Findings papers are no longer eligible for submission to ARR or to *ACL venues.&lt;br /&gt;
&lt;br /&gt;
To the point about arXiv:&lt;br /&gt;
* ARR’s more frequent submission cycles allow authors to get feedback more quickly&lt;br /&gt;
* 860 ARR submissions to date have opted in to be hosted as anonymous preprints&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Stats == &lt;br /&gt;
&lt;br /&gt;
===  Submissions and resubmissions ===&lt;br /&gt;
&lt;br /&gt;
Since the first deadline in May 2021, ARR has run 10 cycles and received a total of more than 6000 submissions, including more than 800 resubmissions. Full statistics can be found on our [http://stats.aclrollingreview.org/ stats website].&lt;br /&gt;
&lt;br /&gt;
===  Authors and reviewers ===&lt;br /&gt;
&lt;br /&gt;
A total of 13801 unique OpenReview profiles have been associated with ARR to date, across authors, reviewers, action editors, tech team, and program chairs.&lt;br /&gt;
&lt;br /&gt;
==== Authors, author experience and authors as reviewers ====&lt;br /&gt;
&lt;br /&gt;
11787 unique author ids are associated with ARR submissions. The distribution of number of submissions across authors is shown in the chart below (5th percentile: 1; median: 1; mean: 1.95; 95th percentile: 5; max: 46).&lt;br /&gt;
&lt;br /&gt;
[[File:Figure1.PNG]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for authors who have provided Semantic Scholar IDs (5th percentile: 1; median: 12; mean: 33; 95th percentile: 179). As expected, authors skew “junior”, with many having 0-5 publications - and of course, authors with no publications do not yet have Semantic Scholar profiles.&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
==== Reviewers, reviewer load and reviewer experience ====&lt;br /&gt;
&lt;br /&gt;
4391 unique reviewer ids are associated with ARR cycles to date. The distribution of number of reviews completed per reviewer who completed at least one review (including reviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 6; mean: 5.47; 95th percentile: 10; max: 17). This means that the cumulative load per reviewer across 10 months is approximately 5.5 papers on average (median: 6).&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for reviewers who have provided Semantic Scholar IDs (5th percentile: 3; median: 20; mean: 40.26; 95th percentile: 170). Reviewers skew less junior than authors. When the ARR EiCs invite someone to review, the person must either have 5 publications, with at least one in the past 5 years, or be nominated by a senior action editor, action editor or EiC. However, the statistics here also include emergency reviewers, who may be more junior.&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
==== Action editors, action editor load  and action editor experience ====&lt;br /&gt;
&lt;br /&gt;
531 unique action editor ids are associated with ARR cycles to date. The distribution of number of metareviews completed per action editor who completed at least one metareview (including metareviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 10; mean: 8.67; 95th percentile: 14; max: 20).&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for action editors who have provided Semantic Scholar IDs (5th percentile: 5; median: 54; mean: 71.39; 95th percentile: 200). Action editors skew less junior than reviewers. Our initial pool of action editors was drawn from area chairs and senior area chairs of recent *ACL conferences, plus workshop organizers from recent years, balancing as much as possible for diversity in geography and affiliation type (industry/academia).&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Innovations ==&lt;br /&gt;
&lt;br /&gt;
With ARR (a single system and process for reviewing) we have been able to satisfy several of the short-term proposals for improving reviewing and roll out other initiatives, including:&lt;br /&gt;
* Anonymous preprints - hosting anonymous preprints for NLP papers through OpenReview. 875 papers opted in to anonymous preprints through January 2022.&lt;br /&gt;
* Revise and resubmit - authors may revise and resubmit their papers, which we try to match to the previous reviewers (if available) unless the authors request new reviewers. 853 ARR submissions through January are resubmissions. See our stats dashboard for month-by-month breakdowns of resubmissions and requests for reviewer reassignment.&lt;br /&gt;
* Reviews as data (with Iryna Gurevych, Ilia Kuznetsov and Nils Dycke) - authors and reviewers may opt-in to provide reviews for research (see https://openreview.net/forum?id=28n-0nBPTch).&lt;br /&gt;
* Reviewer mentoring - providing mentoring to junior reviewers and training to all reviewers (see https://aclrollingreview.org/reviewertutorial).&lt;br /&gt;
* Responsible NLP research (with the NAACL program and reproducibility chairs) - providing a responsible NLP research checklist for authors, and training about ethics and reproducibility in NLP research (see https://aclrollingreview.org/responsibleNLPresearch/). Since December, all ARR submissions have completed the checklist.&lt;br /&gt;
* Ethics review (with the ACL ethics committee) - providing guidelines for ethics reviewing (https://aclrollingreview.org/ethicsreviewertutorial).&lt;br /&gt;
* Review statistics - providing statistics on the *ACL review process over time.&lt;br /&gt;
* Reviewer recognition (coming to the February cycle!) - providing recognition letters to reviewers about their service.&lt;br /&gt;
&lt;br /&gt;
== ARR: Looking ahead ==&lt;br /&gt;
&lt;br /&gt;
In 2022, ARR will be used by EMNLP and AACL-IJCNLP, INLG and SIGDIAL, and numerous *ACL workshops.&lt;br /&gt;
&lt;br /&gt;
Future ARR initiatives will build on the current initiatives listed above. For example, we will soon roll out:&lt;br /&gt;
* Review quality assessment - we are starting an initiative where authors and AEs rate review quality. This data will feed into reviewer mentoring and reviewer recognition (identifying reviewers whose reviews are outstanding).&lt;br /&gt;
* Submissions-over-time statistics - what happens to NLP papers? For example, how often is a typical paper resubmitted before it is committed to and accepted at a venue?&lt;br /&gt;
* Improved reviewer assignment - Graham Neubig continues to explore ways to improve reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
ARR has invited permanent Senior Action Editors, each responsible for a particular area of NLP. ARR is also inviting permanent ethics reviewers.&lt;br /&gt;
&lt;br /&gt;
The ACL exec will add more EiCs to ARR so that we can effectively distribute the load of managing each cycle as well as building out ethics reviewing, mentoring, and other ARR initiatives. The tech team will recruit a co-CTO.&lt;br /&gt;
&lt;br /&gt;
== Concerns and Challenges ==&lt;br /&gt;
&lt;br /&gt;
ARR is a paper-centric reviewing service, independent of any conference. It is not our job to provide acceptance/rejection decisions. Many authors feel unsure of this new process, and understandably so. AEs and reviewers are also new to this rolling process and are concerned about their ongoing loads. Conference chairs are unsure of the new process and opt for hybrid mode instead of supporting ARR fully, again understandably so. Overall, this means we are constantly addressing concerns from multiple stakeholders, some in conflict with each other (for example, authors who want immediate, in-depth reviews versus reviewers who want low loads and plenty of time to review).&lt;br /&gt;
&lt;br /&gt;
Moving to a new submission and review infrastructure (OpenReview) has proved challenging. The OpenReview team are very collegial and helpful, but the structure of ARR is unusual for OpenReview, which lacked many of the features we needed to implement ARR, including basic automations such as reminder emails and automatic checking of resubmissions. Consequently, our tech team has worked very hard, while ARR was rolling out, to implement the initiatives outlined above and to provide consistent conflict-of-interest detection, AE assignment, and reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
In order for peer review to work, everyone must participate. Because the *ACL community skews young (see stats above), we must train junior reviewers and continue to involve more senior people in the reviewing process. To the extent that people think they can submit many papers, insist on high quality reviews, and yet provide no reviewing service themselves, any peer review system will fail. It is critical for experienced researchers to continue to participate in peer review.&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75005</id>
		<title>2022Q1 Reports: ACL Rolling Review</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75005"/>
		<updated>2022-03-01T20:52:25Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== ARR: Looking Back == &lt;br /&gt;
&lt;br /&gt;
How did ARR come to exist? In the fall of 2019 and spring of 2020, the ACL Exec convened a committee to discuss the future of peer review for *ACL venues. This committee consisted primarily of recent program chairs of big *ACL conferences, and was tasked with considering how to handle “the rapid growth of submissions”. The core problems were stated as follows:&lt;br /&gt;
&lt;br /&gt;
* There are many submissions to major NLP conferences each year; the rate of increase at the time was exponential&lt;br /&gt;
* Low acceptance rates at major NLP conferences led us to suspect that many submissions get resubmitted over and over&lt;br /&gt;
&lt;br /&gt;
In addition, arXiv increasingly interferes with the peer review process, with some authors complaining of bias in favor of arXiv preprints and others complaining of the slowness of peer review and of the risks due to low acceptance rates at major NLP conferences.&lt;br /&gt;
&lt;br /&gt;
The committee came up with two sets of proposals: https://www.aclweb.org/portal/content/short-term-reform-proposals-acl-reviewing (June 2020) and https://www.aclweb.org/portal/content/long-term-reform-proposal-acl-reviewing (June 2020). &lt;br /&gt;
&lt;br /&gt;
Regarding the core goal of ARR, cutting review load and increasing review consistency through revise-and-resubmit (R&amp;amp;R), the submission counts are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Venue &lt;br /&gt;
! 2020 &lt;br /&gt;
! 2021 &lt;br /&gt;
! 2022 &lt;br /&gt;
|-&lt;br /&gt;
| ACL &lt;br /&gt;
| 3429 &lt;br /&gt;
| 3350 &lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| NAACL &lt;br /&gt;
| - &lt;br /&gt;
| 1797 &lt;br /&gt;
| - &lt;br /&gt;
|-&lt;br /&gt;
| ARR &lt;br /&gt;
| - &lt;br /&gt;
| 3939 (through November) &lt;br /&gt;
| 2093 (including December 2021) &lt;br /&gt;
|-&lt;br /&gt;
| ARR resubmissions &lt;br /&gt;
| - &lt;br /&gt;
| 221 (through November) &lt;br /&gt;
| 632 (December 2021/January 2022)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
853 submissions that, without ARR, would almost certainly have gone to new reviewers, who would have had to write new reviews without knowledge of the previous submissions and reviews, instead went to reviewers (in the majority of cases the same ones, but sometimes new ones) who had access to the previous reviews. It is important to stress that these numbers portray an incomplete picture of potential ARR gains, as ARR has not yet been through one complete year (that is, one full “conference season”). By the fall report, we will have a clearer picture of the overall reduction in reviewing effort, as we will know how many papers rejected from ACL/NAACL 2022 were re-committed to AACL-IJCNLP 2022, EMNLP 2022 or another *ACL 2022 venue (e.g. *ACL workshops) without incurring new reviews.&lt;br /&gt;
&lt;br /&gt;
We do note that another of the short-term proposals, Findings, has been widely taken up, also contributing to reduced reviewing load, as Findings papers are no longer eligible for submission to ARR or to *ACL venues. &lt;br /&gt;
&lt;br /&gt;
To the point about arXiv:&lt;br /&gt;
* ARR’s more frequent submission cycles allow authors to get feedback more quickly&lt;br /&gt;
* 860 ARR submissions to date have opted in to being hosted as anonymous preprints&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Stats == &lt;br /&gt;
&lt;br /&gt;
===  Submissions and resubmissions ===&lt;br /&gt;
&lt;br /&gt;
Since the first deadline in May 2021, ARR has run 10 cycles and received more than 6000 submissions in total, including more than 800 resubmissions. Full statistics can be found on our [http://stats.aclrollingreview.org/ website].&lt;br /&gt;
&lt;br /&gt;
===  Authors and reviewers ===&lt;br /&gt;
&lt;br /&gt;
A total of 13801 unique OpenReview profiles have been associated with ARR to date, across authors, reviewers, action editors, tech team, and program chairs.&lt;br /&gt;
&lt;br /&gt;
==== Authors, author experience and authors as reviewers ====&lt;br /&gt;
&lt;br /&gt;
11787 unique author ids are associated with ARR submissions. The distribution of the number of submissions per author is shown in the chart below (5th percentile: 1; median: 1; mean: 1.95; 95th percentile: 5; max: 46).&lt;br /&gt;
&lt;br /&gt;
[[File:figure1.png]]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for authors who have provided Semantic Scholar IDs (5th percentile: 1; median: 12; mean: 33; 95th percentile: 179). As expected, authors skew “junior”, with many having 0-5 publications, and of course authors with no publications do not yet have Semantic Scholar profiles.&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
==== Reviewers, reviewer load and reviewer experience ====&lt;br /&gt;
&lt;br /&gt;
4391 unique reviewer ids are associated with ARR cycles to date. The distribution of the number of reviews completed per reviewer who completed at least one review (including reviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 6; mean: 5.47; 95th percentile: 10; max: 17). This means that the cumulative load per reviewer across 10 months is approximately 5.5 papers on average.&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for reviewers who have provided Semantic Scholar IDs (5th percentile: 3; median: 20; mean: 40.26; 95th percentile: 170). Reviewers skew less junior than authors. When the ARR EiCs invite someone to review, the person must either have at least 5 publications, with at least one in the past 5 years, or be nominated by a senior action editor, action editor or EiC. However, the statistics here also include emergency reviewers, who may be more junior. &lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
==== Action editors, action editor load  and action editor experience ====&lt;br /&gt;
&lt;br /&gt;
531 unique action editor ids are associated with ARR cycles to date. The distribution of the number of metareviews completed per action editor who completed at least one metareview (including metareviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 10; mean: 8.67; 95th percentile: 14; max: 20).&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for action editors who have provided Semantic Scholar IDs (5th percentile: 5; median: 54; mean: 71.39; 95th percentile: 200). Action editors skew less junior than reviewers. Our initial pool of action editors was drawn from area chairs and senior area chairs of recent *ACL conferences, plus workshop organizers from recent years, balancing as much as possible for diversity by geography and affiliation type (industry/academia).&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Innovations ==&lt;br /&gt;
&lt;br /&gt;
With ARR (a single system and process for reviewing) we have been able to implement several of the short-term proposals for improving reviewing and to roll out other initiatives, including:&lt;br /&gt;
* Anonymous preprints - hosting anonymous preprints for NLP papers through OpenReview. 875 papers opted in to anonymous preprints through January 2022.&lt;br /&gt;
* Revise and resubmit - authors may revise and resubmit their papers, which we try to match to the previous reviewers (if available) unless the authors request new reviewers. 853 ARR submissions through January are resubmissions. See our stats dashboard for month-by-month breakdowns of resubmissions and requests for reviewer reassignment.&lt;br /&gt;
* Reviews as data (with Iryna Gurevych, Ilia Kuznetsov and Nils Dycke) - authors and reviewers may opt in to providing their reviews for research (see https://openreview.net/forum?id=28n-0nBPTch).&lt;br /&gt;
* Reviewer mentoring - providing mentoring to junior reviewers and training to all reviewers (see https://aclrollingreview.org/reviewertutorial).&lt;br /&gt;
* Responsible NLP research (with the NAACL program and reproducibility chairs) - providing a responsible NLP research checklist for authors, and training about ethics and reproducibility in NLP research (see https://aclrollingreview.org/responsibleNLPresearch/). Since December, all ARR submissions have completed the checklist.&lt;br /&gt;
* Ethics review (with the ACL ethics committee) - providing guidelines for ethics reviewing (https://aclrollingreview.org/ethicsreviewertutorial).&lt;br /&gt;
* Review statistics - providing statistics on the *ACL review process over time.&lt;br /&gt;
* Reviewer recognition (coming to the February cycle!) - providing recognition letters to reviewers about their service.&lt;br /&gt;
&lt;br /&gt;
== ARR: Looking ahead ==&lt;br /&gt;
&lt;br /&gt;
In 2022, ARR will be used by EMNLP and AACL-IJCNLP, INLG and SIGDIAL, and numerous *ACL workshops.&lt;br /&gt;
&lt;br /&gt;
Future ARR initiatives will build on the current initiatives listed above. For example, we will soon roll out:&lt;br /&gt;
* Review quality assessment - we are starting an initiative where authors and AEs rate review quality. This data will feed into reviewer mentoring and reviewer recognition (identifying reviewers whose reviews are outstanding).&lt;br /&gt;
* Submissions-over-time statistics - what happens to NLP papers? For example, how often is a typical paper resubmitted before it is committed to and accepted at a venue?&lt;br /&gt;
* Improved reviewer assignment - Graham Neubig continues to explore ways to improve reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
ARR has invited permanent Senior Action Editors, each responsible for a particular area of NLP. ARR is also inviting permanent ethics reviewers.&lt;br /&gt;
&lt;br /&gt;
The ACL exec will add more EiCs to ARR so that we can effectively distribute the load of managing each cycle as well as building out ethics reviewing, mentoring, and other ARR initiatives. The tech team will recruit a co-CTO.&lt;br /&gt;
&lt;br /&gt;
== Concerns and Challenges ==&lt;br /&gt;
&lt;br /&gt;
ARR is a paper-centric reviewing service, independent of any conference; it is not our job to provide acceptance/rejection decisions. Many authors feel unsure about this new process, and understandably so. AEs and reviewers are also new to this rolling process and are concerned about their ongoing loads. Conference chairs are likewise unsure of the new process and opt for a hybrid mode instead of supporting ARR fully, again understandably so. Overall, this means we are constantly addressing concerns from multiple stakeholders, some in conflict with each other (for example, authors who want immediate, in-depth reviews versus reviewers who want low loads and plenty of time to review). &lt;br /&gt;
&lt;br /&gt;
Moving to a new submission and review infrastructure (OpenReview) has proved challenging. The OpenReview team are very collegial and helpful, but the structure of ARR is unusual for OpenReview, which lacked many of the features and functionalities we needed to implement ARR, including basic automations such as reminder emails and automatic checking of resubmissions. Consequently, our tech team has worked very hard to implement the various initiatives outlined above while we were rolling out ARR, and to provide consistent conflict of interest detection, AE assignment and reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
In order for peer review to work, everyone must participate. Because the *ACL community skews young (see stats above), we must train junior reviewers and continue to involve more senior people in the reviewing process. To the extent that people think they can submit many papers, insist on high quality reviews, and yet provide no reviewing service themselves, any peer review system will fail. It is critical for experienced researchers to continue to participate in peer review.&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=File:Figure1.PNG&amp;diff=75004</id>
		<title>File:Figure1.PNG</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=File:Figure1.PNG&amp;diff=75004"/>
		<updated>2022-03-01T20:51:42Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75003</id>
		<title>2022Q1 Reports: ACL Rolling Review</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75003"/>
		<updated>2022-03-01T20:47:07Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== ARR: Looking Back == &lt;br /&gt;
&lt;br /&gt;
How did ARR come to exist? In the fall of 2019 and spring of 2020, the ACL Exec convened a committee to discuss the future of peer review for *ACL venues. This committee consisted primarily of recent program chairs of big *ACL conferences, and was tasked with considering how to handle “the rapid growth of submissions”. The core problems were stated as follows:&lt;br /&gt;
&lt;br /&gt;
* There are many submissions to major NLP conferences each year; the rate of increase at the time was exponential&lt;br /&gt;
* Low acceptance rates at major NLP conferences led us to suspect that many submissions were resubmitted over and over&lt;br /&gt;
&lt;br /&gt;
In addition, arXiv increasingly interferes with the peer review process: some authors complain of bias in favor of arXiv preprints, while others complain of the slowness of peer review and of the risks posed by low acceptance rates at major NLP conferences.&lt;br /&gt;
&lt;br /&gt;
The committee came up with two sets of proposals: https://www.aclweb.org/portal/content/short-term-reform-proposals-acl-reviewing (June 2020) and https://www.aclweb.org/portal/content/long-term-reform-proposal-acl-reviewing (June 2020). &lt;br /&gt;
&lt;br /&gt;
Regarding the core goal of ARR, cutting review load and increasing review consistency through revise-and-resubmit (R&amp;amp;R), the submission counts are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot;| Venue &lt;br /&gt;
! 2020 &lt;br /&gt;
! 2021 &lt;br /&gt;
! 2022 &lt;br /&gt;
|-&lt;br /&gt;
| ACL &lt;br /&gt;
| 3429 &lt;br /&gt;
| 3350 &lt;br /&gt;
| -&lt;br /&gt;
|-&lt;br /&gt;
| NAACL &lt;br /&gt;
| - &lt;br /&gt;
| 1797 &lt;br /&gt;
| - &lt;br /&gt;
|-&lt;br /&gt;
| ARR &lt;br /&gt;
| - &lt;br /&gt;
| 3939 (through November) &lt;br /&gt;
| 2093 (including December 2021) &lt;br /&gt;
|-&lt;br /&gt;
| ARR resubmissions &lt;br /&gt;
| - &lt;br /&gt;
| 221 (through November) &lt;br /&gt;
| 632 (December 2021/January 2022)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
853 submissions that, without ARR, would almost certainly have gone to new reviewers, who would have had to write new reviews without knowledge of the previous submissions and reviews, instead went to reviewers (in the majority of cases the same ones, but sometimes new ones) who had access to the previous reviews. It is important to stress that these numbers portray an incomplete picture of potential ARR gains, as ARR has not yet been through one complete year (that is, one full “conference season”). By the fall report, we will have a clearer picture of the overall reduction in reviewing effort, as we will know how many papers rejected from ACL/NAACL 2022 were re-committed to AACL-IJCNLP 2022, EMNLP 2022 or another *ACL 2022 venue (e.g. *ACL workshops) without incurring new reviews.&lt;br /&gt;
&lt;br /&gt;
We do note that another of the short-term proposals, Findings, has been widely taken up, also contributing to reduced reviewing load, as Findings papers are no longer eligible for submission to ARR or to *ACL venues. &lt;br /&gt;
&lt;br /&gt;
To the point about arXiv:&lt;br /&gt;
* ARR’s more frequent submission cycles allow authors to get feedback more quickly&lt;br /&gt;
* 860 ARR submissions to date have opted in to being hosted as anonymous preprints&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Stats == &lt;br /&gt;
&lt;br /&gt;
===  Submissions and resubmissions ===&lt;br /&gt;
&lt;br /&gt;
Since the first deadline in May 2021, ARR has run 10 cycles and received more than 6000 submissions in total, including more than 800 resubmissions. Full statistics can be found on our [http://stats.aclrollingreview.org/ website].&lt;br /&gt;
&lt;br /&gt;
===  Authors and reviewers ===&lt;br /&gt;
&lt;br /&gt;
A total of 13801 unique OpenReview profiles have been associated with ARR to date, across authors, reviewers, action editors, tech team, and program chairs.&lt;br /&gt;
&lt;br /&gt;
==== Authors, author experience and authors as reviewers ====&lt;br /&gt;
&lt;br /&gt;
11787 unique author ids are associated with ARR submissions. The distribution of the number of submissions per author is shown in the chart below (5th percentile: 1; median: 1; mean: 1.95; 95th percentile: 5; max: 46).&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for authors who have provided Semantic Scholar IDs (5th percentile: 1; median: 12; mean: 33; 95th percentile: 179). As expected, authors skew “junior”, with many having 0-5 publications, and of course authors with no publications do not yet have Semantic Scholar profiles.&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
==== Reviewers, reviewer load and reviewer experience ====&lt;br /&gt;
&lt;br /&gt;
4391 unique reviewer ids are associated with ARR cycles to date. The distribution of the number of reviews completed per reviewer who completed at least one review (including reviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 6; mean: 5.47; 95th percentile: 10; max: 17). This means that the cumulative load per reviewer across 10 months is approximately 5.5 papers on average.&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for reviewers who have provided Semantic Scholar IDs (5th percentile: 3; median: 20; mean: 40.26; 95th percentile: 170). Reviewers skew less junior than authors. When the ARR EiCs invite someone to review, the person must either have at least 5 publications, with at least one in the past 5 years, or be nominated by a senior action editor, action editor or EiC. However, the statistics here also include emergency reviewers, who may be more junior. &lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
==== Action editors, action editor load  and action editor experience ====&lt;br /&gt;
&lt;br /&gt;
531 unique action editor ids are associated with ARR cycles to date. The distribution of the number of metareviews completed per action editor who completed at least one metareview (including metareviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 10; mean: 8.67; 95th percentile: 14; max: 20).&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for action editors who have provided Semantic Scholar IDs (5th percentile: 5; median: 54; mean: 71.39; 95th percentile: 200). Action editors skew less junior than reviewers. Our initial pool of action editors was drawn from area chairs and senior area chairs of recent *ACL conferences, plus workshop organizers from recent years, balancing as much as possible for diversity by geography and affiliation type (industry/academia).&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Innovations ==&lt;br /&gt;
&lt;br /&gt;
With ARR (a single system and process for reviewing) we have been able to implement several of the short-term proposals for improving reviewing and to roll out other initiatives, including:&lt;br /&gt;
* Anonymous preprints - hosting anonymous preprints for NLP papers through OpenReview. 875 papers opted in to anonymous preprints through January 2022.&lt;br /&gt;
* Revise and resubmit - authors may revise and resubmit their papers, which we try to match to the previous reviewers (if available) unless the authors request new reviewers. 853 ARR submissions through January are resubmissions. See our stats dashboard for month-by-month breakdowns of resubmissions and requests for reviewer reassignment.&lt;br /&gt;
* Reviews as data (with Iryna Gurevych, Ilia Kuznetsov and Nils Dycke) - authors and reviewers may opt in to providing their reviews for research (see https://openreview.net/forum?id=28n-0nBPTch).&lt;br /&gt;
* Reviewer mentoring - providing mentoring to junior reviewers and training to all reviewers (see https://aclrollingreview.org/reviewertutorial).&lt;br /&gt;
* Responsible NLP research (with the NAACL program and reproducibility chairs) - providing a responsible NLP research checklist for authors, and training about ethics and reproducibility in NLP research (see https://aclrollingreview.org/responsibleNLPresearch/). Since December, all ARR submissions have completed the checklist.&lt;br /&gt;
* Ethics review (with the ACL ethics committee) - providing guidelines for ethics reviewing (https://aclrollingreview.org/ethicsreviewertutorial).&lt;br /&gt;
* Review statistics - providing statistics on the *ACL review process over time.&lt;br /&gt;
* Reviewer recognition (coming to the February cycle!) - providing recognition letters to reviewers about their service.&lt;br /&gt;
&lt;br /&gt;
== ARR: Looking ahead ==&lt;br /&gt;
&lt;br /&gt;
In 2022, ARR will be used by EMNLP and AACL-IJCNLP, INLG and SIGDIAL, and numerous *ACL workshops.&lt;br /&gt;
&lt;br /&gt;
Future ARR initiatives will build on the current initiatives listed above. For example, we will soon roll out:&lt;br /&gt;
* Review quality assessment - we are starting an initiative where authors and AEs rate review quality. This data will feed into reviewer mentoring and reviewer recognition (identifying reviewers whose reviews are outstanding).&lt;br /&gt;
* Submissions-over-time statistics - what happens to NLP papers? For example, how often is a typical paper resubmitted before it is committed to and accepted at a venue?&lt;br /&gt;
* Improved reviewer assignment - Graham Neubig continues to explore ways to improve reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
ARR has invited permanent Senior Action Editors, each responsible for a particular area of NLP. ARR is also inviting permanent ethics reviewers.&lt;br /&gt;
&lt;br /&gt;
The ACL exec will add more EiCs to ARR so that we can effectively distribute the load of managing each cycle as well as building out ethics reviewing, mentoring, and other ARR initiatives. The tech team will recruit a co-CTO.&lt;br /&gt;
&lt;br /&gt;
== Concerns and Challenges ==&lt;br /&gt;
&lt;br /&gt;
ARR is a paper-centric reviewing service, independent of any conference; it is not our job to provide acceptance/rejection decisions. Many authors feel unsure about this new process, and understandably so. AEs and reviewers are also new to this rolling process and are concerned about their ongoing loads. Conference chairs are likewise unsure of the new process and opt for a hybrid mode instead of supporting ARR fully, again understandably so. Overall, this means we are constantly addressing concerns from multiple stakeholders, some in conflict with each other (for example, authors who want immediate, in-depth reviews versus reviewers who want low loads and plenty of time to review). &lt;br /&gt;
&lt;br /&gt;
Moving to a new submission and review infrastructure (OpenReview) has proved challenging. The OpenReview team are very collegial and helpful, but the structure of ARR is unusual for OpenReview, which lacked many of the features and functionalities we needed to implement ARR, including basic automations such as reminder emails and automatic checking of resubmissions. Consequently, our tech team has worked very hard to implement the various initiatives outlined above while we were rolling out ARR, and to provide consistent conflict of interest detection, AE assignment and reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
In order for peer review to work, everyone must participate. Because the *ACL community skews young (see stats above), we must train junior reviewers and continue to involve more senior people in the reviewing process. To the extent that people think they can submit many papers, insist on high quality reviews, and yet provide no reviewing service themselves, any peer review system will fail. It is critical for experienced researchers to continue to participate in peer review.&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75002</id>
		<title>2022Q1 Reports: ACL Rolling Review</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports:_ACL_Rolling_Review&amp;diff=75002"/>
		<updated>2022-03-01T20:36:17Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: Created page with &amp;quot;# ARR: Looking Back  How did ARR come to exist? In the fall of 2019 and spring of 2020, the ACL Exec convened a committee to discuss the future of peer review for *ACL venues....&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;# ARR: Looking Back&lt;br /&gt;
&lt;br /&gt;
How did ARR come to exist? In the fall of 2019 and spring of 2020, the ACL Exec convened a committee to discuss the future of peer review for *ACL venues. This committee consisted primarily of recent program chairs of big *ACL conferences, and was tasked with considering how to handle “the rapid growth of submissions”. The core problems were stated as follows:&lt;br /&gt;
&lt;br /&gt;
- There are many submissions to major NLP conferences each year; the rate of increase at the time was exponential&lt;br /&gt;
- Low acceptance rates at major NLP conferences led us to suspect that many submissions were resubmitted over and over&lt;br /&gt;
&lt;br /&gt;
In addition, arXiv increasingly interferes with the peer review process: some authors complain of bias in favor of arXiv preprints, while others complain of the slowness of peer review and of the risks posed by low acceptance rates at major NLP conferences.&lt;br /&gt;
&lt;br /&gt;
The committee came up with two sets of proposals: https://www.aclweb.org/portal/content/short-term-reform-proposals-acl-reviewing (June 2020) and https://www.aclweb.org/portal/content/long-term-reform-proposal-acl-reviewing (June 2020). &lt;br /&gt;
&lt;br /&gt;
Regarding the core goal of ARR, cutting review load and increasing review consistency through revise-and-resubmit (R&amp;amp;R), the submission counts are:&lt;br /&gt;
&lt;br /&gt;
| | 2020 | 2021 | 2022 |&lt;br /&gt;
|--|-----|------|------|&lt;br /&gt;
| ACL | 3429 | 3350 | |&lt;br /&gt;
| NAACL | - | 1797 | |&lt;br /&gt;
| ARR | - | 3939 (through November) | 2093 (including December 2021) |&lt;br /&gt;
| ARR resubmissions | - | 221 (through November) | 632 (December 2021/January 2022) |&lt;br /&gt;
&lt;br /&gt;
853 submissions that, without ARR, would almost certainly have gone to new reviewers, who would have had to write new reviews without knowledge of the previous submissions and reviews, instead went to reviewers (in the majority of cases the same ones, but sometimes new ones) who had access to the previous reviews. It is important to stress that these numbers portray an incomplete picture of potential ARR gains, as ARR has not yet been through one complete year (that is, one full “conference season”). By the fall report, we will have a clearer picture of the overall reduction in reviewing effort, as we will know how many papers rejected from ACL/NAACL 2022 were re-committed to AACL-IJCNLP 2022, EMNLP 2022 or another *ACL 2022 venue (e.g. *ACL workshops) without incurring new reviews.&lt;br /&gt;
&lt;br /&gt;
We do note that another of the short-term proposals, Findings, has been widely taken up, also contributing to reduced reviewing load, as Findings papers are no longer eligible for submission to ARR or to *ACL venues. &lt;br /&gt;
&lt;br /&gt;
To the point about arXiv:&lt;br /&gt;
- ARR’s more frequent submission cycles allow authors to get feedback more quickly&lt;br /&gt;
- 860 ARR submissions to date have opted in to being hosted as anonymous preprints&lt;br /&gt;
&lt;br /&gt;
# ARR Today: Stats&lt;br /&gt;
&lt;br /&gt;
## Submissions and resubmissions&lt;br /&gt;
&lt;br /&gt;
Since the first deadline in May 2021, ARR has run 10 cycles and received more than 6000 submissions in total, including more than 800 resubmissions. Full statistics can be found on our [website](http://stats.aclrollingreview.org/).&lt;br /&gt;
&lt;br /&gt;
## Authors and reviewers&lt;br /&gt;
&lt;br /&gt;
A total of 13801 unique OpenReview profiles have been associated with ARR to date, across authors, reviewers, action editors, tech team, and program chairs.&lt;br /&gt;
&lt;br /&gt;
### Authors, author experience and authors as reviewers&lt;br /&gt;
&lt;br /&gt;
11787 unique author ids are associated with ARR submissions. The distribution of the number of submissions per author is shown in the chart below (5th percentile: 1; median: 1; mean: 1.95; 95th percentile: 5; max: 46).&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for authors who have provided Semantic Scholar IDs (5th percentile: 1; median: 12; mean: 33; 95th percentile: 179). As expected, authors skew “junior”, with many having 0-5 publications, and of course authors with no publications do not yet have Semantic Scholar profiles.&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
### Reviewers, reviewer load and reviewer experience&lt;br /&gt;
&lt;br /&gt;
4391 unique reviewer ids are associated with ARR cycles to date. The distribution of the number of reviews completed per reviewer who completed at least one review (including reviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 6; mean: 5.47; 95th percentile: 10; max: 17). This means that the cumulative load per reviewer across 10 months is approximately 5.5 papers on average.&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for reviewers who have provided Semantic Scholar IDs (5th percentile: 3; median: 20; mean: 40.26; 95th percentile: 170). Reviewers skew less junior than authors. When the ARR EiCs invite someone to review, the person must either have at least 5 publications, with at least one in the past 5 years, or be nominated by a senior action editor, action editor or EiC. However, the statistics here also include emergency reviewers, who may be more junior. &lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
### Action editors, action editor load  and action editor experience&lt;br /&gt;
&lt;br /&gt;
531 unique action editor ids are associated with ARR cycles to date. The distribution of the number of metareviews completed per action editor who completed at least one metareview (including metareviews of resubmissions) is shown in the chart below (5th percentile: 1; median: 10; mean: 8.67; 95th percentile: 14; max: 20).&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
We use Semantic Scholar profiles to estimate author and reviewer “seniority”. The chart below shows the number of previous publications from Semantic Scholar for action editors who have provided Semantic Scholar IDs (5th percentile: 5; median: 54; mean: 71.39; 95th percentile: 200). Action editors skew less junior than reviewers. Our initial pool of action editors was drawn from area chairs and senior area chairs of recent *ACL conferences, plus workshop organizers from recent years, balancing as much as possible for diversity by geography and affiliation type (industry/academia).&lt;br /&gt;
&lt;br /&gt;
[figure here]&lt;br /&gt;
&lt;br /&gt;
== ARR Today: Innovations ==&lt;br /&gt;
&lt;br /&gt;
With ARR (a single system and process for reviewing) we have been able to satisfy several of the short-term proposals for improving reviewing and to roll out other initiatives, including:&lt;br /&gt;
* Anonymous preprints - hosting anonymous preprints for NLP papers through OpenReview. 875 papers opted in to anonymous preprints through January 2022.&lt;br /&gt;
* Revise and resubmit - authors may revise and resubmit their papers, which we try to match to the previous reviewers (if available) unless the authors request new reviewers. 853 ARR submissions through January are resubmissions. See our stats dashboard for month-by-month breakdowns of resubmissions and requests for reviewer reassignment.&lt;br /&gt;
* Reviews as data (with Iryna Gurevych, Ilia Kuznetsov and Nils Dycke) - authors and reviewers may opt in to provide their reviews for research (see https://openreview.net/forum?id=28n-0nBPTch).&lt;br /&gt;
* Reviewer mentoring - providing mentoring to junior reviewers and training to all reviewers (see https://aclrollingreview.org/reviewertutorial).&lt;br /&gt;
* Responsible NLP research (with the NAACL program and reproducibility chairs) - providing a responsible NLP research checklist for authors, and training about ethics and reproducibility in NLP research (see https://aclrollingreview.org/responsibleNLPresearch/). Since December, all ARR submissions have completed the checklist.&lt;br /&gt;
* Ethics review (with the ACL ethics committee) - providing guidelines for ethics reviewing (see https://aclrollingreview.org/ethicsreviewertutorial).&lt;br /&gt;
* Review statistics - providing statistics on the *ACL review process over time.&lt;br /&gt;
* Reviewer recognition (coming in the February cycle!) - providing recognition letters to reviewers about their service.&lt;br /&gt;
&lt;br /&gt;
== ARR: Looking Ahead ==&lt;br /&gt;
&lt;br /&gt;
In 2022, ARR will be used by EMNLP, AACL-IJCNLP, INLG, SIGDIAL, and numerous *ACL workshops.&lt;br /&gt;
&lt;br /&gt;
Future ARR initiatives will build on the current initiatives listed above. For example, we will soon roll out:&lt;br /&gt;
* Review quality assessment - we are starting an initiative in which authors and AEs rate review quality. This data will feed into reviewer mentoring and reviewer recognition (identifying reviewers whose reviews are outstanding).&lt;br /&gt;
* Submissions-over-time statistics - what happens to NLP papers? For example, how many times is a typical paper resubmitted before it is committed to, and accepted at, a venue?&lt;br /&gt;
* Improved reviewer assignment - Graham Neubig continues to explore ways to improve reviewer assignment.&lt;br /&gt;
&lt;br /&gt;
ARR has invited permanent Senior Action Editors, each responsible for a particular area of NLP. ARR is also inviting permanent ethics reviewers.&lt;br /&gt;
&lt;br /&gt;
The ACL exec will add more EiCs to ARR so that we can effectively distribute the load of managing each cycle and of building out ethics reviewing, mentoring, and other ARR initiatives. The tech team will recruit a co-CTO.&lt;br /&gt;
&lt;br /&gt;
== Concerns and Challenges ==&lt;br /&gt;
&lt;br /&gt;
ARR is a paper-centric reviewing service, independent of any conference; it is not our job to provide acceptance/rejection decisions. Many authors feel unsure about this new process, and understandably so. AEs and reviewers are also new to rolling review and are concerned about their ongoing loads. Conference chairs, likewise unsure of the new process, opt for a hybrid mode instead of supporting ARR fully, again understandably so. Overall, this means we are constantly addressing concerns from multiple interests, some in conflict with one another (for example, authors who want immediate, in-depth reviews versus reviewers who want low loads and ample time to review).&lt;br /&gt;
&lt;br /&gt;
Moving to a new submission and review infrastructure (OpenReview) has proved challenging. The OpenReview team is very collegial and helpful, but the structure of ARR is unusual for OpenReview, which lacked many of the features and functionalities we needed, including basic automations such as reminder emails and automatic checking of resubmissions. Consequently, while rolling out ARR, our tech team has worked very hard to implement the initiatives outlined above and to provide consistent AE assignment, reviewer assignment, and conflict-of-interest detection.&lt;br /&gt;
&lt;br /&gt;
In order for peer review to work, everyone must participate. Because the *ACL community skews young (see the statistics above), we must train junior reviewers and continue to involve more senior people in the reviewing process. To the extent that people think they can submit many papers and insist on high-quality reviews, yet provide no reviewing service themselves, any peer review system will fail. It is critical for experienced researchers to continue to participate in peer review.&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports&amp;diff=74945</id>
		<title>2022Q1 Reports</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2022Q1_Reports&amp;diff=74945"/>
		<updated>2022-02-17T15:57:22Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [[2022Q1 Agenda]] - Agenda for the Q1 teleconference&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [[2022Q1 Reports: Office]] - Priscilla Rasmussen&lt;br /&gt;
* [[2022Q1 Reports: Secretary]] - Shiqi Zhao&lt;br /&gt;
* [[2022Q1 Reports: Treasurer]] - David Yarowsky&lt;br /&gt;
* [[2022Q1 Reports: Member at-large - EMEA]] - Anna Korhonen&lt;br /&gt;
* [[2022Q1 Reports: Member at-large - Asia/Pacific]] - Yusuke Miyao&lt;br /&gt;
* [[2022Q1 Reports: Member at-large - Americas]] - Mohit Bansal&lt;br /&gt;
* [[2022Q1 Reports: EACL]] - Shuly Wintner&lt;br /&gt;
* [[2022Q1 Reports: NAACL]] - Luciana Benotti&lt;br /&gt;
* [[2022Q1 Reports: AACL]] - Keh-Yih Su&lt;br /&gt;
* [[2022Q1 Reports: ACL 2022]] - Rada Mihalcea&lt;br /&gt;
* [[2022Q1 Reports: ACL 2023]] - Tim Baldwin&lt;br /&gt;
* [[2022Q1 Reports: ACL 2024]] - Iryna Gurevych&lt;br /&gt;
* [[2022Q1 Reports: ACL 2025]] - Emily M. Bender&lt;br /&gt;
* [[2022Q1 Reports: CL Journal]] - Hwee Tou Ng&lt;br /&gt;
* [[2022Q1 Reports: ACL Rolling Review]] - Amanda Stent, Goran Glavas, Sebastian Riedel, Pascale Fung&lt;br /&gt;
&lt;br /&gt;
* [[2022Q1 Reports: Information Director]] - Nitin Madnani&lt;br /&gt;
* [[2022Q1 Reports: TACL Journal]] - Brian Roark, Ani Nenkova&lt;br /&gt;
* [[2022Q1 Reports: Anthology Director]] - Matt Post&lt;br /&gt;
* [[2022Q1 Reports: Publicity Director]] - Barbara Plank&lt;br /&gt;
* [[2022Q1 Reports: PCC]] - Graeme Hirst, Donia Scott&lt;br /&gt;
* [[2022Q1 Reports: Equity Director]] - Natalie Schluter&lt;br /&gt;
* [[2022Q1 Reports: Sponsorship Director]] - Chris Callison-Burch&lt;br /&gt;
* [[2022Q1 Reports: ARR Editors in Chief]] - Pascale Fung, Sebastian Riedel, Amanda Stent&lt;br /&gt;
* [[2022Q1 Reports: Ethics Committee Co-chairs]] - Min-Yen Kan, Karën Fort, Yulia Tsvetkov&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[[2022Q1 Minutes (public version)]]&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2015Q3_Reports:_SIGDIAL&amp;diff=70861</id>
		<title>2015Q3 Reports: SIGDIAL</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2015Q3_Reports:_SIGDIAL&amp;diff=70861"/>
		<updated>2015-07-06T16:52:25Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;REPORT ON SIGdial ACTIVITIES: June 2014 to June 2015&lt;br /&gt;
&lt;br /&gt;
Amanda Stent, SIGdial President&lt;br /&gt;
&lt;br /&gt;
SIGdial is the ACL and ISCA Special Interest Group on Discourse and Dialogue. More information about SIGdial can be found on our website: http://www.sigdial.org, including an updated calendar of upcoming events, resources, and previous reports. Users can also become members via the website; members are added to a low-volume, moderated mailing list (mainly conference and job announcements). SIGdial is fully compliant with ACL and ISCA guidelines for SIGs.&lt;br /&gt;
&lt;br /&gt;
In April 2015 SIGdial held an election. We thank Ethan Selfridge, Alexandros Papangelis and Stefan Ultes for running the election for us. The newly elected Executive Board members for SIGdial are Amanda Stent (President), Jason Williams (Vice President), and Kristiina Jokinen (Secretary/Treasurer). The Scientific Advisory Board consists of three appointed and three elected members: Barbara Di Eugenio, Maxine Eskenazi, Kazunori Komatani, Gabriel Skantze, Svetlana Stoyanchev and Michael Strube. We thank our previous board members (2013-2015): Luciana Benotti, Raquel Fernandez Rovira, Kazunori Komatani, Matthew Purver, Verena Rieser and David Traum. Additional positions are President Emeritus: Tim Paek, IUI liaison: Candace Sidner, SIG SLUD/JSAI liaison: Yasuhiro Katagiri, ISCA Liaison: Michael Picheny, AVIOS Liaison: Alexander Rudnicky, SIGSEM Liaison: Harry Bunt, SIGGEN Liaison: Verena Rieser, IEEE SLT Liaison: Fabrice Lefevre, and Mailing List Maintainer: Laurent Romary. After more than ten years, Laurent Romary will be stepping down; we thank him very much for his work, due to which the SIGdial mailing list is one of the most useful in CL.&lt;br /&gt;
&lt;br /&gt;
SIGdial has held an annual workshop on discourse and dialogue since 2000. Due to our steady growth of members, starting in 2009, the SIGDIAL venue became a conference and began to recognize Best Paper awards. SIGDIAL 2014 was held in Philadelphia, in &amp;quot;near co-location&amp;quot; with ACL 2014. In a first, SIGDIAL was held in co-location with INLG and there was a special joint SIGDIAL+INLG session as well as a joint banquet. In addition, there was one SIGDIAL special session, the second Dialog State Tracking Challenge. The general chairs were Kallirroi Georgila and Matthew Stone, and the program chairs were Helen Hastie and Ani Nenkova. The local organizer was Keelan Evanini, ably assisted by a phenomenal team from the Educational Testing Service. The mentoring chair was Svetlana Stoyanchev and the sponsorship chair was Giuseppe Di Fabbrizio. Sponsorships came from the Educational Testing Service, Microsoft Research, Amazon.com, Yahoo! Labs, Honda Research Institute, the Linguistic Data Consortium, Mitsubishi Electric Research Laboratories, the University of Pennsylvania Linguistics Department, AT&amp;amp;T Labs Research, the PARLANCE project and the SENSEI project. For SIGDIAL 2014, 67 papers were submitted (including 4 demo proposals),  30 papers and all 4 demo proposals were accepted, and 9 papers were accepted to the Dialog State Tracking Challenge special session. There were 67 pre-registered attendees to SIGDIAL alone, and an additional 28 pre-registered attendees who registered for both SIGDIAL and INLG. As is now SIGDIAL tradition, we video-recorded oral presentations with permission of the presenters; these were processed by Superlectures.com and are archived online. &lt;br /&gt;
&lt;br /&gt;
SIGdial has noticed that participation is lower in years when SIGDIAL co-locates with ACL than in years when it co-locates with INTERSPEECH (submissions by year and location: 2014/ACL/US - 67; 2013/INTERSPEECH/Europe - 117; 2012/ACL/Asia - 63; 2011/ACL/US - 76; 2010/INTERSPEECH/Asia - 97; 2009/INTERSPEECH/Europe - 103). The relatively low attendance last year (ACL, US) stands in contrast to the record attendance in the two previous years (INTERSPEECH, Asia and Europe respectively). We anticipate a bounce-back in attendance for SIGDIAL 2015 (INTERSPEECH, Europe; and a record 146 submissions). We welcome the input of the ACL exec on how to draw more engagement from the ACL and discourse communities.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2015 will be held in Prague on September 2-4, just before INTERSPEECH 2015. The general chairs are Alexander Koller and Gabriel Skantze. The program chairs are Masahiro Araki and Carolyn Penstein-Rose. The local chair is Filip Jurcicek. The mentoring chair is Svetlana Stoyanchev and the sponsorships chair is Kristy Boyer. There is one special session, MultiLing 2015.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2016 will be held near INTERSPEECH 2016 in sunny LA at the Institute for Creative Technologies. The general chairs are Wolfgang Minker and Raquel Fernandez. The program chairs are Giuseppe Carenini and Ruichiro Higashinaka. The local organizers are Ron Artstein and Alesia Gainer. The mentoring chair is Pierre Lison. The sponsorships chair is Ethan Selfridge. The organizers will solicit special session proposals in early spring 2016.&lt;br /&gt;
&lt;br /&gt;
SIGdial also endorses a number of other dialogue-related workshops and events that are open to the general community. The SIGdial Endorsed events for 2015 include:&lt;br /&gt;
&lt;br /&gt;
* June 2014: The 3rd Dialog State Tracking Challenge &lt;br /&gt;
* January 2015:  IWSDS 2015&lt;br /&gt;
* September 2015: Young Researcher&#039;s Roundtable in Spoken Dialogue Systems &lt;br /&gt;
* September 2015: SEMDIAL 2015&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2015Q3_Reports:_SIGDIAL&amp;diff=70860</id>
		<title>2015Q3 Reports: SIGDIAL</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2015Q3_Reports:_SIGDIAL&amp;diff=70860"/>
		<updated>2015-07-06T16:51:47Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;REPORT ON SIGdial ACTIVITIES: June 2014 to June 2015&lt;br /&gt;
&lt;br /&gt;
Amanda Stent, SIGdial President&lt;br /&gt;
&lt;br /&gt;
SIGdial is the ACL and ISCA Special Interest Group on Discourse and Dialogue. More information about SIGdial can be found on our website: http://www.sigdial.org, including an updated calendar of upcoming events, resources, and previous reports. Users can also become members via the website; members are added to a low-volume, moderated mailing list (mainly conference and job announcements). SIGdial is fully compliant with ACL and ISCA guidelines for SIGs.&lt;br /&gt;
&lt;br /&gt;
In April 2015 SIGdial held an election. We thank Ethan Selfridge, Alexandros Papangelis and Stefan Ultes for running the election for us. The newly elected Executive Board members for SIGdial are Amanda Stent (President), Jason Williams (Vice President), and Kristiina Jokinen (Secretary/Treasurer). The Scientific Advisory Board consists of three appointed and three elected members: Barbara Di Eugenio, Maxine Eskenazi, Kazunori Komatani, Gabriel Skantze, Svetlana Stoyanchev and Michael Strube. We thank our previous board members (2013-2015): Luciana Benotti, Raquel Fernandez Rovira, Kazunori Komatani, Matthew Purver, Verena Rieser and David Traum. Additional positions are President Emeritus: Tim Paek, IUI liaison: Candace Sidner, SIG SLUD/JSAI liaison: Yasuhiro Katagiri, ISCA Liaison: Michael Picheny, AVIOS Liaison: Alexander Rudnicky, SIGSEM Liaison: Harry Bunt, SIGGEN Liaison: Verena Rieser, IEEE SLT Liaison: Fabrice Lefevre, and Mailing List Maintainer: Laurent Romary. After more than ten years, Laurent Romary will be stepping down; we thank him very much for his work, due to which the SIGdial mailing list is one of the most useful in CL.&lt;br /&gt;
&lt;br /&gt;
SIGdial has held an annual workshop on discourse and dialogue since 2000. Due to our steady growth of members, starting in 2009, the SIGDIAL venue became a conference and began to recognize Best Paper awards. SIGDIAL 2014 was held in Philadelphia, in &amp;quot;near co-location&amp;quot; with ACL 2014. In a first, SIGDIAL was held in co-location with INLG and there was a special joint SIGDIAL+INLG session as well as a joint banquet. In addition, there was one SIGDIAL special session, the second Dialog State Tracking Challenge. The general chairs were Kallirroi Georgila and Matthew Stone, and the program chairs were Helen Hastie and Ani Nenkova. The local organizer was Keelan Evanini, ably assisted by a phenomenal team from the Educational Testing Service. The mentoring chair was Svetlana Stoyanchev and the sponsorship chair was Giuseppe Di Fabbrizio. Sponsorships came from the Educational Testing Service, Microsoft Research, Amazon.com, Yahoo! Labs, Honda Research Institute, the Linguistic Data Consortium, Mitsubishi Electric Research Laboratories, the University of Pennsylvania Linguistics Department, AT&amp;amp;T Labs Research, the PARLANCE project and the SENSEI project. For SIGDIAL 2014, 67 papers were submitted (including 4 demo proposals),  30 papers and all 4 demo proposals were accepted, and 9 papers were accepted to the Dialog State Tracking Challenge special session. There were 67 pre-registered attendees to SIGDIAL alone, and an additional 28 pre-registered attendees who registered for both SIGDIAL and INLG. As is now SIGDIAL tradition, we video-recorded oral presentations with permission of the presenters; these were processed by Superlectures.com and are archived online. &lt;br /&gt;
&lt;br /&gt;
SIGdial has noticed that participation is lower in years when SIGDIAL co-locates with ACL than in years when it co-locates with INTERSPEECH (which presumably draws more speech scientists). [submissions by year and location: 2014/ACL/US - 67; 2013/INTERSPEECH/Europe - 117; 2012/ACL/Asia - 63; 2011/ACL/US - 76; 2010/INTERSPEECH/Asia - 97; 2009/INTERSPEECH/Europe - 103] The relatively low attendance last year (ACL, US) stands in contrast to the record attendance in the two previous years (INTERSPEECH, Asia and Europe respectively). We anticipate a bounce-back in attendance for SIGDIAL 2015 (INTERSPEECH, Europe; and a record 146 submissions). We welcome the input of the ACL exec on how to draw more engagement from the ACL and discourse communities.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2015 will be held in Prague on September 2-4, just before INTERSPEECH 2015. The general chairs are Alexander Koller and Gabriel Skantze. The program chairs are Masahiro Araki and Carolyn Penstein-Rose. The local chair is Filip Jurcicek. The mentoring chair is Svetlana Stoyanchev and the sponsorships chair is Kristy Boyer. There is one special session, MultiLing 2015.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2016 will be held near INTERSPEECH 2016 in sunny LA at the Institute for Creative Technologies. The general chairs are Wolfgang Minker and Raquel Fernandez. The program chairs are Giuseppe Carenini and Ruichiro Higashinaka. The local organizers are Ron Artstein and Alesia Gainer. The mentoring chair is Pierre Lison. The sponsorships chair is Ethan Selfridge. The organizers will solicit special session proposals in early spring 2016.&lt;br /&gt;
&lt;br /&gt;
SIGdial also endorses a number of other dialogue-related workshops and events that are open to the general community. The SIGdial Endorsed events for 2015 include:&lt;br /&gt;
&lt;br /&gt;
* June 2014: The 3rd Dialog State Tracking Challenge &lt;br /&gt;
* January 2015:  IWSDS 2015&lt;br /&gt;
* September 2015: Young Researcher&#039;s Roundtable in Spoken Dialogue Systems &lt;br /&gt;
* September 2015: SEMDIAL 2015&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2015Q3_Reports:_SIGDIAL&amp;diff=70815</id>
		<title>2015Q3 Reports: SIGDIAL</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2015Q3_Reports:_SIGDIAL&amp;diff=70815"/>
		<updated>2015-07-01T21:00:35Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;REPORT ON SIGdial ACTIVITIES: June 2014 to June 2015&lt;br /&gt;
&lt;br /&gt;
Amanda Stent, SIGdial President&lt;br /&gt;
&lt;br /&gt;
SIGdial is the ACL and ISCA Special Interest Group on Discourse and Dialogue. More information about SIGdial can be found on our website: http://www.sigdial.org, including an updated calendar of upcoming events, resources, and previous reports. Users can also become members via the website; members are added to a low-volume, moderated mailing list (mainly conference and job announcements). SIGdial is fully compliant with ACL and ISCA guidelines for SIGs.&lt;br /&gt;
&lt;br /&gt;
In April 2015 SIGdial held an election. We thank Ethan Selfridge, Alexandros Papangelis and Stefan Ultes for running the election for us. The newly elected Executive Board members for SIGdial are Amanda Stent (President), Jason Williams (Vice President), and Kristiina Jokinen (Secretary/Treasurer). The Scientific Advisory Board consists of three appointed and three elected members: Barbara Di Eugenio, Maxine Eskenazi, Kazunori Komatani, Gabriel Skantze, Svetlana Stoyanchev and Michael Strube. We thank our previous board members (2013-2015): Luciana Benotti, Raquel Fernandez Rovira, Kazunori Komatani, Matthew Purver, Verena Rieser and David Traum. Additional positions are President Emeritus: Tim Paek, IUI liaison: Candace Sidner, SIG SLUD/JSAI liaison: Yasuhiro Katagiri, ISCA Liaison: Michael Picheny, AVIOS Liaison: Alexander Rudnicky, SIGSEM Liaison: Harry Bunt, SIGGEN Liaison: Verena Rieser, IEEE SLT Liaison: Fabrice Lefevre, and Mailing List Maintainer: Laurent Romary. After more than ten years, Laurent Romary will be stepping down; we thank him very much for his work, due to which the SIGdial mailing list is one of the most useful in CL.&lt;br /&gt;
&lt;br /&gt;
SIGdial has held an annual workshop on discourse and dialogue since 2000. Due to our steady growth of members, starting in 2009, the SIGDIAL venue became a conference and began to recognize Best Paper awards. SIGDIAL 2014 was held in Philadelphia, in &amp;quot;near co-location&amp;quot; with ACL 2014. In a first, SIGDIAL was held in co-location with INLG and there was a special joint SIGDIAL+INLG session as well as a joint banquet. In addition, there was one SIGDIAL special session, the second Dialog State Tracking Challenge. The general chairs were Kallirroi Georgila and Matthew Stone, and the program chairs were Helen Hastie and Ani Nenkova. The local organizer was Keelan Evanini, ably assisted by a phenomenal team from the Educational Testing Service. The mentoring chair was Svetlana Stoyanchev and the sponsorship chair was Giuseppe Di Fabbrizio. Sponsorships came from the Educational Testing Service, Microsoft Research, Amazon.com, Yahoo! Labs, Honda Research Institute, the Linguistic Data Consortium, Mitsubishi Electric Research Laboratories, the University of Pennsylvania Linguistics Department, AT&amp;amp;T Labs Research, the PARLANCE project and the SENSEI project. For SIGDIAL 2014, 67 papers were submitted (including 4 demo proposals),  30 papers and all 4 demo proposals were accepted, and 9 papers were accepted to the Dialog State Tracking Challenge special session. There were 67 pre-registered attendees to SIGDIAL alone, and an additional 28 pre-registered attendees who registered for both SIGDIAL and INLG. As is now SIGDIAL tradition, we video-recorded oral presentations with permission of the presenters; these were processed by Superlectures.com and are archived online. &lt;br /&gt;
&lt;br /&gt;
SIGdial has noticed that participation is lower in years when SIGDIAL co-locates with ACL than in years when it co-locates with INTERSPEECH (which presumably draws more speech scientists). [submissions by year and location: 2014/ACL/US - 67; 2013/INTERSPEECH/Europe - 117; 2012/ACL/Asia - 63; 2011/ACL/US - 76; 2010/INTERSPEECH/Asia - 97; 2009/INTERSPEECH/Europe - 103] The relatively low attendance last year (ACL, US) stands in contrast to the record attendance in the two previous years (INTERSPEECH, Asia and Europe respectively). We anticipate a bounce-back in attendance for SIGDIAL 2015 (INTERSPEECH, Europe). We welcome the input of the ACL exec on how to draw more engagement from the ACL and discourse communities.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2015 will be held in Prague on September 2-4, just before INTERSPEECH 2015. The general chairs are Alexander Koller and Gabriel Skantze. The program chairs are Masahiro Araki and Carolyn Penstein-Rose. The local chair is Filip Jurcicek. The mentoring chair is Svetlana Stoyanchev and the sponsorships chair is Kristy Boyer. There is one special session, MultiLing 2015.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2016 will be held near INTERSPEECH 2016 in sunny LA at the Institute for Creative Technologies. The general chairs are Wolfgang Minker and Raquel Fernandez. The program chairs are Giuseppe Carenini and Ruichiro Higashinaka. The local organizers are Ron Artstein and Alesia Gainer. The mentoring chair is Pierre Lison. The sponsorships chair is Ethan Selfridge. The organizers will solicit special session proposals in early spring 2016.&lt;br /&gt;
&lt;br /&gt;
SIGdial also endorses a number of other dialogue-related workshops and events that are open to the general community. The SIGdial Endorsed events for 2015 include:&lt;br /&gt;
&lt;br /&gt;
* June 2014: The 3rd Dialog State Tracking Challenge &lt;br /&gt;
* January 2015:  IWSDS 2015&lt;br /&gt;
* September 2015: Young Researcher&#039;s Roundtable in Spoken Dialogue Systems &lt;br /&gt;
* September 2015: SEMDIAL 2015&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2015Q3_Reports:_SIGDIAL&amp;diff=70814</id>
		<title>2015Q3 Reports: SIGDIAL</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2015Q3_Reports:_SIGDIAL&amp;diff=70814"/>
		<updated>2015-07-01T21:00:05Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;REPORT ON SIGdial ACTIVITIES: June 2014 to June 2015&lt;br /&gt;
&lt;br /&gt;
Amanda Stent, SIGdial President&lt;br /&gt;
&lt;br /&gt;
SIGdial is the ACL and ISCA Special Interest Group on Discourse and Dialogue. More information about SIGdial can be found on our website: http://www.sigdial.org, including an updated calendar of upcoming events, resources, and previous reports. Users can also become members via the website; members are added to a low-volume, moderated mailing list (mainly conference and job announcements). SIGdial is fully compliant with ACL and ISCA guidelines for SIGs.&lt;br /&gt;
&lt;br /&gt;
In April 2015 SIGdial held an election. We thank Ethan Selfridge, Alexandros Papangelis and Stefan Ultes for running the election for us. The newly elected Executive Board members for SIGdial are Amanda Stent (President), Jason Williams (Vice President), and Kristiina Jokinen (Secretary/Treasurer). The Scientific Advisory Board consists of three appointed and three elected members: Barbara Di Eugenio, Maxine Eskenazi, Kazunori Komatani, Gabriel Skantze, Svetlana Stoyanchev and Michael Strube. We thank our previous board members (2013-2015): Luciana Benotti, Raquel Fernandez Rovira, Kazunori Komatani, Matthew Purver, Verena Rieser and David Traum. Additional positions are President Emeritus: Tim Paek, IUI liaison: Candace Sidner, SIG SLUD/JSAI liaison: Yasuhiro Katagiri, ISCA Liaison: Michael Picheny, AVIOS Liaison: Alexander Rudnicky, SIGSEM Liaison: Harry Bunt, SIGGEN Liaison: Verena Rieser, IEEE SLT Liaison: Fabrice Lefevre, and Mailing List Maintainer: Laurent Romary. After more than ten years, Laurent Romary will be stepping down; we thank him very much for his work, due to which the SIGdial mailing list is one of the most useful in CL.&lt;br /&gt;
&lt;br /&gt;
SIGdial has held an annual workshop on discourse and dialogue since 2000. Due to our steady growth of members, starting in 2009, the SIGDIAL venue became a conference and began to recognize Best Paper awards. SIGDIAL 2014 was held in Philadelphia, in &amp;quot;near co-location&amp;quot; with ACL 2014. In a first, SIGDIAL was held in co-location with INLG and there was a special joint SIGDIAL+INLG session as well as a joint banquet. In addition, there was one SIGDIAL special session, the second Dialog State Tracking Challenge. The general chairs were Kallirroi Georgila and Matthew Stone, and the program chairs were Helen Hastie and Ani Nenkova. The local organizer was Keelan Evanini, ably assisted by a phenomenal team from the Educational Testing Service. The mentoring chair was Svetlana Stoyanchev and the sponsorship chair was Giuseppe Di Fabbrizio. Sponsorships came from the Educational Testing Service, Microsoft Research, Amazon.com, Yahoo! Labs, Honda Research Institute, the Linguistic Data Consortium, Mitsubishi Electric Research Laboratories, the University of Pennsylvania Linguistics Department, AT&amp;amp;T Labs Research, the PARLANCE project and the SENSEI project. For SIGDIAL 2014, 67 papers were submitted (including 4 demo proposals),  30 papers and all 4 demo proposals were accepted, and 9 papers were accepted to the Dialog State Tracking Challenge special session. There were 67 pre-registered attendees to SIGDIAL alone, and an additional 28 pre-registered attendees who registered for both SIGDIAL and INLG. As is now SIGDIAL tradition, we video-recorded oral presentations with permission of the presenters; these were processed by Superlectures.com and are archived online. &lt;br /&gt;
&lt;br /&gt;
SIGdial has noticed that participation is lower in years when SIGDIAL co-locates with ACL than in years when it co-locates with INTERSPEECH (which presumably draws more speech scientists). [submissions by year and location: 2014/ACL/US - 68; 2013/INTERSPEECH/Europe - 117; 2012/ACL/Asia - 63; 2011/ACL/US - 76; 2010/INTERSPEECH/Asia - 97; 2009/INTERSPEECH/Europe - 103] The relatively low attendance last year (ACL, US) stands in contrast to the record attendance in the two previous years (INTERSPEECH, Asia and Europe respectively). We anticipate a bounce-back in attendance for SIGDIAL 2015 (INTERSPEECH, Europe). We welcome the input of the ACL exec on how to draw more engagement from the ACL and discourse communities.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2015 will be held in Prague on September 2-4, just before INTERSPEECH 2015. The general chairs are Alexander Koller and Gabriel Skantze. The program chairs are Masahiro Araki and Carolyn Penstein-Rose. The local chair is Filip Jurcicek. The mentoring chair is Svetlana Stoyanchev and the sponsorships chair is Kristy Boyer. There is one special session, MultiLing 2015.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2016 will be held near INTERSPEECH 2016 in sunny LA at the Institute for Creative Technologies. The general chairs are Wolfgang Minker and Raquel Fernandez. The program chairs are Giuseppe Carenini and Ruichiro Higashinaka. The local organizers are Ron Artstein and Alesia Gainer. The mentoring chair is Pierre Lison. The sponsorships chair is Ethan Selfridge. The organizers will solicit special session proposals in early spring 2016.&lt;br /&gt;
&lt;br /&gt;
SIGdial also endorses a number of other dialogue-related workshops and events that are open to the general community. The SIGdial Endorsed events for 2015 include:&lt;br /&gt;
&lt;br /&gt;
* June 2014: The 3rd Dialog State Tracking Challenge &lt;br /&gt;
* January 2015:  IWSDS 2015&lt;br /&gt;
* September 2015: Young Researcher&#039;s Roundtable in Spoken Dialogue Systems &lt;br /&gt;
* September 2015: SEMDIAL 2015&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2015Q3_Reports:_SIGDIAL&amp;diff=70813</id>
		<title>2015Q3 Reports: SIGDIAL</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2015Q3_Reports:_SIGDIAL&amp;diff=70813"/>
		<updated>2015-07-01T20:52:55Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;REPORT ON SIGdial ACTIVITIES: June 2014 to June 2015&lt;br /&gt;
&lt;br /&gt;
Amanda Stent, SIGdial President&lt;br /&gt;
&lt;br /&gt;
SIGdial is the ACL and ISCA Special Interest Group on Discourse and Dialogue. More information about SIGdial can be found on our website: http://www.sigdial.org, including an updated calendar of upcoming events, resources, and previous reports. Users can also become members via the website; members are added to a low-volume, moderated mailing list (mainly conference and job announcements). SIGdial is fully compliant with ACL and ISCA guidelines for SIGs.&lt;br /&gt;
&lt;br /&gt;
In April 2015 SIGdial held an election. We thank Ethan Selfridge, Alexandros Papangelis and Stefan Ultes for running the election for us. The newly elected Executive Board members for SIGdial are Amanda Stent (President), Jason Williams (Vice President), and Kristiina Jokinen (Secretary/Treasurer). The Scientific Advisory Board consists of three appointed and three elected members: Barbara Di Eugenio, Maxine Eskenazi, Kazunori Komatani, Gabriel Skantze, Svetlana Stoyanchev and Michael Strube. We thank our previous board members (2013-2015): Luciana Benotti, Raquel Fernandez Rovira, Kazunori Komatani, Matthew Purver, Verena Rieser and David Traum. Additional positions are President Emeritus: Tim Paek, IUI liaison: Candace Sidner, SIG SLUD/JSAI liaison: Yasuhiro Katagiri, ISCA Liaison: Michael Picheny, AVIOS Liaison: Alexander Rudnicky, SIGSEM Liaison: Harry Bunt, SIGGEN Liaison: Verena Rieser, IEEE SLT Liaison: Fabrice Lefevre, and Mailing List Maintainer: Laurent Romary. After more than ten years, Laurent Romary will be stepping down; we thank him very much for his work, due to which the SIGdial mailing list is one of the most useful in CL.&lt;br /&gt;
&lt;br /&gt;
SIGdial has held an annual workshop on discourse and dialogue since 2000. Due to our steady membership growth, the SIGDIAL venue became a conference in 2009 and began to recognize Best Paper awards. SIGDIAL 2014 was held in Philadelphia, in &amp;quot;near co-location&amp;quot; with ACL 2014. In a first, SIGDIAL was held in co-location with INLG, and there was a special joint SIGDIAL+INLG session as well as a joint banquet. In addition, there was one SIGDIAL special session, the second Dialog State Tracking Challenge. The general chairs were Kallirroi Georgila and Matthew Stone, and the program chairs were Helen Hastie and Ani Nenkova. The local organizer was Keelan Evanini, ably assisted by a phenomenal team from the Educational Testing Service. The mentoring chair was Svetlana Stoyanchev and the sponsorship chair was Giuseppe Di Fabbrizio. Sponsorships came from the Educational Testing Service, Microsoft Research, Amazon.com, Yahoo! Labs, Honda Research Institute, the Linguistic Data Consortium, Mitsubishi Electric Research Laboratories, the University of Pennsylvania Linguistics Department, AT&amp;amp;T Labs Research, the PARLANCE project and the SENSEI project. For SIGDIAL 2014, 67 papers were submitted (including 4 demo proposals); 30 papers and all 4 demo proposals were accepted, and 9 papers were accepted to the Dialog State Tracking Challenge special session. There were 67 pre-registered attendees for SIGDIAL alone, and an additional 28 pre-registered attendees who registered for both SIGDIAL and INLG. As is now SIGDIAL tradition, we video-recorded oral presentations with the permission of the presenters; these were processed by Superlectures.com and are archived online.&lt;br /&gt;
&lt;br /&gt;
SIGdial has noticed that participation is lower in years when SIGDIAL co-locates with ACL (which presumably draws more discourse researchers) than in years when SIGDIAL co-locates with INTERSPEECH (which presumably draws more speech scientists). [Submissions by year and location: 2014/ACL/US - 68; 2013/INTERSPEECH/Europe - 117; 2012/ACL/Asia - 63; 2011/ACL/US - 76; 2010/INTERSPEECH/Asia - 97; 2009/INTERSPEECH/Europe - 103] The relatively low attendance last year (ACL, US) stands in contrast to the record attendance in the two previous years (INTERSPEECH, Asia and Europe respectively). We anticipate a bounce back in attendance for SIGDIAL 2015 (INTERSPEECH, Europe). We welcome the input of the ACL exec on how to draw more engagement from the ACL and discourse communities.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2015 will be held in Prague on September 2-4, just before INTERSPEECH 2015. The general chairs are Alexander Koller and Gabriel Skantze. The program chairs are Masahiro Araki and Carolyn Penstein-Rose. The local chair is Filip Jurcicek. The mentoring chair is Svetlana Stoyanchev and the sponsorships chair is Kristy Boyer. There is one special session, MultiLing 2015.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2016 will be held near INTERSPEECH 2016 in sunny LA at the Institute for Creative Technologies. The general chairs are Wolfgang Minker and Raquel Fernandez. The program chairs are Giuseppe Carenini and Ruichiro Higashinaka. The local organizers are Ron Artstein and Alesia Gainer. The mentoring chair is Pierre Lison. The sponsorships chair is Ethan Selfridge. The organizers will solicit special session proposals in early spring 2016.&lt;br /&gt;
&lt;br /&gt;
SIGdial also endorses a number of other dialogue-related workshops and events that are open to the general community. The SIGdial Endorsed events for 2015 include:&lt;br /&gt;
&lt;br /&gt;
* June 2014: The 3rd Dialog State Tracking Challenge &lt;br /&gt;
* January 2015:  IWSDS 2015&lt;br /&gt;
* September 2015: Young Researcher&#039;s Roundtable in Spoken Dialogue Systems &lt;br /&gt;
* September 2015: SEMDIAL 2015&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2015Q3_Reports:_SIGDIAL&amp;diff=70805</id>
		<title>2015Q3 Reports: SIGDIAL</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2015Q3_Reports:_SIGDIAL&amp;diff=70805"/>
		<updated>2015-07-01T15:24:14Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;REPORT ON SIGdial ACTIVITIES: June 2014 to June 2015&lt;br /&gt;
&lt;br /&gt;
Amanda Stent, SIGdial President&lt;br /&gt;
&lt;br /&gt;
SIGdial is the ACL and ISCA Special Interest Group on Discourse and Dialogue. More information about SIGdial can be found on our website: http://www.sigdial.org, including an updated calendar of upcoming events, resources, and previous reports. Users can also become members via the website; members are added to a low-volume, moderated mailing list (mainly conference and job announcements). SIGdial is fully compliant with ACL and ISCA guidelines for SIGs.&lt;br /&gt;
&lt;br /&gt;
In April 2015 SIGdial held an election. We thank Ethan Selfridge, Alexandros Papangelis and Stefan Ultes for running the election for us. The newly elected Executive Board members for SIGdial are Amanda Stent (President), Jason Williams (Vice President), and Kristiina Jokinen (Secretary/Treasurer). The Scientific Advisory Board consists of three appointed and three elected members: Barbara Di Eugenio, Maxine Eskenazi, Kazunori Komatani, Gabriel Skantze, Svetlana Stoyanchev and Michael Strube. We thank our previous board members (2013-2015): Luciana Benotti, Raquel Fernandez Rovira, Kazunori Komatani, Matthew Purver, Verena Rieser and David Traum. Additional positions are President Emeritus: Tim Paek, IUI liaison: Candace Sidner, SIG SLUD/JSAI liaison: Yasuhiro Katagiri, ISCA Liaison: Michael Picheny, AVIOS Liaison: Alexander Rudnicky, SIGSEM Liaison: Harry Bunt, SIGGEN Liaison: Verena Rieser, IEEE SLT Liaison: Fabrice Lefevre, and Mailing List Maintainer: Laurent Romary. After more than ten years, Laurent Romary will be stepping down; we thank him very much for his work, due to which the SIGdial mailing list is one of the most useful in CL.&lt;br /&gt;
&lt;br /&gt;
SIGdial has held an annual workshop on discourse and dialogue since 2000. Due to our steady membership growth, the SIGDIAL venue became a conference in 2009 and began to recognize Best Paper awards. SIGDIAL 2014 was held in Philadelphia, in &amp;quot;near co-location&amp;quot; with ACL 2014. In a first, SIGDIAL was held in co-location with INLG, and there was a special joint SIGDIAL+INLG session as well as a joint banquet. In addition, there was one SIGDIAL special session, the second Dialog State Tracking Challenge. The general chairs were Kallirroi Georgila and Matthew Stone, and the program chairs were Helen Hastie and Ani Nenkova. The local organizer was Keelan Evanini, ably assisted by a phenomenal team from the Educational Testing Service. The mentoring chair was Svetlana Stoyanchev and the sponsorship chair was Giuseppe Di Fabbrizio. Sponsorships came from the Educational Testing Service, Microsoft Research, Amazon.com, Yahoo! Labs, Honda Research Institute, the Linguistic Data Consortium, Mitsubishi Electric Research Laboratories, the University of Pennsylvania Linguistics Department, AT&amp;amp;T Labs Research, the PARLANCE project and the SENSEI project. For SIGDIAL 2014, 67 papers were submitted (including 4 demo proposals); 30 papers and all 4 demo proposals were accepted, and 9 papers were accepted to the Dialog State Tracking Challenge special session. There were 67 pre-registered attendees for SIGDIAL alone, and an additional 28 pre-registered attendees who registered for both SIGDIAL and INLG. As is now SIGDIAL tradition, we video-recorded oral presentations with the permission of the presenters; these were processed by Superlectures.com and are archived online.&lt;br /&gt;
&lt;br /&gt;
SIGdial has noticed that attendance is lower in years when SIGDIAL co-locates with ACL (which presumably draws more discourse researchers) than in years when SIGDIAL co-locates with INTERSPEECH (which presumably draws more speech scientists). Attendance is also lower in years when SIGDIAL is held in the US. The relatively low attendance last year (ACL/US) stands in contrast to the record attendance in the two previous years (INTERSPEECH, Asia and Europe respectively). We anticipate a bounce back in attendance for SIGDIAL 2015 (INTERSPEECH, Europe). We welcome the input of the ACL exec on how to draw more engagement from the ACL and discourse communities.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2015 will be held in Prague on September 2-4, just before INTERSPEECH 2015. The general chairs are Alexander Koller and Gabriel Skantze. The program chairs are Masahiro Araki and Carolyn Penstein-Rose. The local chair is Filip Jurcicek. The mentoring chair is Svetlana Stoyanchev and the sponsorships chair is Kristy Boyer. There is one special session, MultiLing 2015.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2016 will be held near INTERSPEECH 2016 in sunny LA at the Institute for Creative Technologies. The general chairs are Wolfgang Minker and Raquel Fernandez. The program chairs are Giuseppe Carenini and Ruichiro Higashinaka. The local organizers are Ron Artstein and Alesia Gainer. The mentoring chair is Pierre Lison. The sponsorships chair is Ethan Selfridge. The organizers will solicit special session proposals in early spring 2016.&lt;br /&gt;
&lt;br /&gt;
SIGdial also endorses a number of other dialogue-related workshops and events that are open to the general community. The SIGdial Endorsed events for 2015 include:&lt;br /&gt;
&lt;br /&gt;
* June 2014: The 3rd Dialog State Tracking Challenge &lt;br /&gt;
* January 2015:  IWSDS 2015&lt;br /&gt;
* September 2015: Young Researcher&#039;s Roundtable in Spoken Dialogue Systems &lt;br /&gt;
* September 2015: SEMDIAL 2015&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2015Q3_Reports:_SIGDIAL&amp;diff=70780</id>
		<title>2015Q3 Reports: SIGDIAL</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2015Q3_Reports:_SIGDIAL&amp;diff=70780"/>
		<updated>2015-06-30T17:44:23Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: Created page with &amp;quot;REPORT ON SIGdial ACTIVITIES: June 2014 to June 2015  Amanda Stent, SIGdial President  SIGdial is the ACL and ISCA Special Interest Group on Discourse and Dialogue. More infor...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;REPORT ON SIGdial ACTIVITIES: June 2014 to June 2015&lt;br /&gt;
&lt;br /&gt;
Amanda Stent, SIGdial President&lt;br /&gt;
&lt;br /&gt;
SIGdial is the ACL and ISCA Special Interest Group on Discourse and Dialogue. More information about SIGdial can be found on our website: http://www.sigdial.org, including an updated calendar of upcoming events, resources, and previous reports. Users can also become members via the website; members are added to a low-volume, moderated mailing list (mainly conference and job announcements). SIGdial is fully compliant with ACL and ISCA guidelines for SIGs.&lt;br /&gt;
&lt;br /&gt;
In April 2015 SIGdial held an election. We thank Ethan Selfridge, Alexandros Papangelis and Stefan Ultes for running the election for us. The newly elected Executive Board members for SIGdial are Amanda Stent (President), Jason Williams (Vice President), and Kristiina Jokinen (Secretary/Treasurer). The Scientific Advisory Board consists of three appointed and three elected members: Barbara Di Eugenio, Maxine Eskenazi, Kazunori Komatani, Gabriel Skantze, Svetlana Stoyanchev and Michael Strube. We thank our previous board members (2013-2015): Luciana Benotti, Raquel Fernandez Rovira, Kazunori Komatani, Matthew Purver, Verena Rieser and David Traum. Additional positions are President Emeritus: Tim Paek, IUI liaison: Candace Sidner, SIG SLUD/JSAI liaison: Yasuhiro Katagiri, ISCA Liaison: Michael Picheny, AVIOS Liaison: Alexander Rudnicky, SIGSEM Liaison: Harry Bunt, SIGGEN Liaison: Verena Rieser, IEEE SLT Liaison: Fabrice Lefevre, and Mailing List Maintainer: Laurent Romary. After more than ten years, Laurent Romary will be stepping down; we thank him very much for his work, due to which the SIGdial mailing list is one of the most useful in CL.&lt;br /&gt;
&lt;br /&gt;
SIGdial has held an annual workshop on discourse and dialogue since 2000. Due to our steady membership growth, the SIGDIAL venue became a conference in 2009 and began to recognize Best Paper awards. SIGDIAL 2014 was held in Philadelphia, in &amp;quot;near co-location&amp;quot; with ACL 2014. In a first, SIGDIAL was held in co-location with INLG, and there was a special joint SIGDIAL+INLG session as well as a joint banquet. In addition, there was one SIGDIAL special session, the second Dialog State Tracking Challenge. The general chairs were Kallirroi Georgila and Matthew Stone, and the program chairs were Helen Hastie and Ani Nenkova. The local organizer was Keelan Evanini, ably assisted by a phenomenal team from the Educational Testing Service. The mentoring chair was Svetlana Stoyanchev and the sponsorship chair was Giuseppe Di Fabbrizio. Sponsorships came from the Educational Testing Service, Microsoft Research, Amazon.com, Yahoo! Labs, Honda Research Institute, the Linguistic Data Consortium, Mitsubishi Electric Research Laboratories, the University of Pennsylvania Linguistics Department, AT&amp;amp;T Labs Research, the PARLANCE project and the SENSEI project. For SIGDIAL 2014, 67 papers were submitted (including 4 demo proposals); 30 papers and all 4 demo proposals were accepted, and 9 papers were accepted to the Dialog State Tracking Challenge special session. There were 67 pre-registered attendees for SIGDIAL alone, and an additional 28 pre-registered attendees who registered for both SIGDIAL and INLG. As is now SIGDIAL tradition, we video-recorded oral presentations with the permission of the presenters; these were processed by Superlectures.com and are archived online.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2015 will be held in Prague on September 2-4, just before INTERSPEECH 2015. The general chairs are Alexander Koller and Gabriel Skantze. The program chairs are Masahiro Araki and Carolyn Penstein-Rose. The local chair is Filip Jurcicek. The mentoring chair is Svetlana Stoyanchev and the sponsorships chair is Kristy Boyer. There is one special session, MultiLing 2015.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2016 will be held near INTERSPEECH 2016 in sunny LA at the Institute for Creative Technologies. The general chairs are Wolfgang Minker and Raquel Fernandez. The program chairs are Giuseppe Carenini and Ruichiro Higashinaka. The local organizers are Ron Artstein and Alesia Gainer. The mentoring chair is Pierre Lison. The sponsorships chair is Ethan Selfridge. The organizers will solicit special session proposals in early spring 2016.&lt;br /&gt;
&lt;br /&gt;
SIGdial also endorses a number of other dialogue-related workshops and events that are open to the general community. The SIGdial Endorsed events for 2015 include:&lt;br /&gt;
&lt;br /&gt;
* June 2014: The 3rd Dialog State Tracking Challenge &lt;br /&gt;
* January 2015:  IWSDS 2015&lt;br /&gt;
* September 2015: Young Researcher&#039;s Roundtable in Spoken Dialogue Systems &lt;br /&gt;
* September 2015: SEMDIAL 2015&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2014Q3_Reports:_SIGDIAL&amp;diff=2410</id>
		<title>2014Q3 Reports: SIGDIAL</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2014Q3_Reports:_SIGDIAL&amp;diff=2410"/>
		<updated>2014-06-11T18:07:23Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;REPORT ON SIGdial ACTIVITIES: June 2013 to June 2014&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Amanda Stent, SIGdial President &lt;br /&gt;
&lt;br /&gt;
SIGdial is the ACL and ISCA Special Interest Group on Discourse and Dialogue. More information about SIGdial can be found on our website: http://www.sigdial.org, including an updated calendar of upcoming events, resources, and previous reports. Users can also become members via the website; members are added to a low-volume, moderated mailing list (mainly conference and job announcements). SIGdial is fully compliant with ACL and ISCA guidelines for SIGs.&lt;br /&gt;
&lt;br /&gt;
The current Executive Board members for SIGdial are Amanda Stent (President), Jason Williams (Vice President), and Kristiina Jokinen (Secretary/Treasurer). The Scientific Advisory Board consists of three appointed and three elected members: Luciana Benotti, Raquel Fernandez Rovira, Kazunori Komatani, Matthew Purver, Verena Rieser, David Traum. Additional positions are President Emeritus: Tim Paek, IUI liaison: Candace Sidner, SIG SLUD/JSAI liaison: Yasuhiro Katagiri, ISCA Liaison: Michael Picheny, AVIOS Liaison: Alexander Rudnicky, SIGSEM Liaison: Harry Bunt, SIGGEN Liaison: Verena Rieser, IEEE SLT Liaison: Fabrice Lefevre, and Mailing List Maintainer: Laurent Romary. &lt;br /&gt;
&lt;br /&gt;
SIGdial has held an annual workshop on discourse and dialogue since 2000. Due to our steady membership growth, the SIGDIAL venue became a conference in 2009 and began to recognize Best Paper awards. SIGDIAL 2013 was held in Metz, France, in &amp;quot;near co-location&amp;quot; with INTERSPEECH 2013. The General Chairs were Maxine Eskenazi and Michael Strube, and the Program Chairs were Jason Williams and Barbara Di Eugenio. The local organizer was Olivier Pietquin. The Mentoring Chair was Kallirroi Georgila and the Sponsorship Chair was Amanda Stent. Sponsorships came from Amazon, Apple, AT&amp;amp;T, the Heidelberg Institute for Theoretical Studies (HITS), HRI, La Region Lorraine, Microsoft Research, Nuance, Samsung and Supelec. For SIGDIAL 2013, 98 papers and 17 demo proposals were submitted, and 57 papers and 14 demo proposals were accepted. There were 115 pre-registered attendees. For the second year in a row, we video-recorded oral presentations with the permission of the presenters; these were processed by Superlectures.com and are archived online. We also provided two student bursaries of $600 each.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2013 featured a new initiative, special sessions. The main SIGDIAL conference is single-track, takes place over two days, and features peer-reviewed papers and invited talks. Special sessions are held on the third day, may run in parallel, and are more flexible in structure; they are intended to accommodate, for example, shared tasks in dialog and discourse. At SIGDIAL 2013 there was one special session, for the first Dialog State Tracking Challenge.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2014 will be held in Philadelphia, PA, on June 18-20, just before ACL 2014. The General Chairs are Matthew Stone and Kallirroi Georgila. The Program Chairs are Helen Hastie and Ani Nenkova. The Local Chair is Keelan Evanini. The Mentoring Chair is Svetlana Stoyanchev and the Sponsorships Chair is Giuseppe Di Fabbrizio. Sponsors include Amazon, AT&amp;amp;T, the Educational Testing Service (ETS), HRI, the LDC, Microsoft Research, Mitsubishi Electric, the Parlance and SENSEI projects, the University of Pennsylvania, and Yahoo Labs. There is one special session, for the second Dialog State Tracking Challenge. SIGDIAL 2014 will be held in co-location and collaboration with INLG 2014, and there will be a joint session between the two conferences on the afternoon of June 19. This joint session features oral presentations and posters. The banquet will be held jointly with INLG just after the joint session.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2015 will be held just before INTERSPEECH 2015.  The General Chairs are Alexander Koller and Gabriel Skantze and the Program Chairs are Carolyn Penstein Rose and Masahiro Araki.&lt;br /&gt;
&lt;br /&gt;
SIGdial also endorses a number of other dialogue-related workshops and events that are open to the general community. The SIGdial Endorsed events for 2014 include: &lt;br /&gt;
* January 2014: IWSDS&lt;br /&gt;
* June 2014: Young Researcher&#039;s Roundtable in Spoken Dialogue Systems&lt;br /&gt;
* June 2014: The 2nd Dialog State Tracking Challenge&lt;br /&gt;
* June 2014: The REAL Challenge&lt;br /&gt;
* September 2014: SEMDIAL (DialWatt)&lt;br /&gt;
&lt;br /&gt;
In the fall of 2014 the SIGdial board will discuss, and SIGdial will then vote on, proposed constitutional amendments, including adding a treasurer to the executive board. In the spring of 2015 SIGdial will hold an election.&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2014Q3_Reports:_SIGDIAL&amp;diff=2409</id>
		<title>2014Q3 Reports: SIGDIAL</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2014Q3_Reports:_SIGDIAL&amp;diff=2409"/>
		<updated>2014-06-11T18:05:36Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;REPORT ON SIGdial ACTIVITIES: June 2013 to June 2014&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Amanda Stent, SIGdial President &lt;br /&gt;
&lt;br /&gt;
SIGdial is the ACL and ISCA Special Interest Group on Discourse and Dialogue. More information about SIGdial can be found on our website: http://www.sigdial.org, including an updated calendar of upcoming events, resources, and previous reports. Users can also become members via the website; members are added to a low-volume, moderated mailing list (mainly conference and job announcements). SIGdial is fully compliant with ACL and ISCA guidelines for SIGs.&lt;br /&gt;
&lt;br /&gt;
The current Executive Board members for SIGdial are Amanda Stent (President), Jason Williams (Vice President), and Kristiina Jokinen (Secretary/Treasurer). The Scientific Advisory Board consists of three appointed and three elected members: Luciana Benotti, Raquel Fernandez Rovira, Kazunori Komatani, Matthew Purver, Verena Rieser, David Traum. Additional positions are President Emeritus: Tim Paek, IUI liaison: Candace Sidner, SIG SLUD/JSAI liaison: Yasuhiro Katagiri, ISCA Liaison: Michael Picheny, AVIOS Liaison: Alexander Rudnicky, SIGSEM Liaison: Harry Bunt, SIGGEN Liaison: Verena Rieser, IEEE SLT Liaison: Fabrice Lefevre, and Mailing List Maintainer: Laurent Romary. &lt;br /&gt;
&lt;br /&gt;
SIGdial has held an annual workshop on discourse and dialogue since 2000. Due to our steady membership growth, the SIGDIAL venue became a conference in 2009 and began to recognize Best Paper awards. SIGDIAL 2013 was held in Metz, France, in &amp;quot;near co-location&amp;quot; with INTERSPEECH 2013. The General Chairs were Maxine Eskenazi and Michael Strube, and the Program Chairs were Jason Williams and Barbara Di Eugenio. The local organizer was Olivier Pietquin. The Mentoring Chair was Kallirroi Georgila and the Sponsorship Chair was Amanda Stent. Sponsorships came from Amazon, Apple, AT&amp;amp;T, the Heidelberg Institute for Theoretical Studies (HITS), HRI, La Region Lorraine, Microsoft Research, Nuance, Samsung and Supelec. For SIGDIAL 2013, 98 papers and 17 demo proposals were submitted, and 57 papers and 14 demo proposals were accepted. There were 115 pre-registered attendees. For the second year in a row, we video-recorded oral presentations with the permission of the presenters; these were processed by Superlectures.com and are archived online. We also provided two student bursaries of $600 each.&lt;br /&gt;
SIGDIAL 2013 featured a new initiative, special sessions. The main SIGDIAL conference is single-track, takes place over two days, and features peer-reviewed papers and invited talks. Special sessions are held on the third day, may run in parallel, and are more flexible in structure; they are intended to accommodate, for example, shared tasks in dialog and discourse. At SIGDIAL 2013 there was one special session, for the first Dialog State Tracking Challenge.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2014 will be held in Philadelphia, PA, on June 18-20, just before ACL 2014. The General Chairs are Matthew Stone and Kallirroi Georgila. The Program Chairs are Helen Hastie and Ani Nenkova. The Local Chair is Keelan Evanini. The Mentoring Chair is Svetlana Stoyanchev and the Sponsorships Chair is Giuseppe Di Fabbrizio. Sponsors include Amazon, AT&amp;amp;T, the Educational Testing Service (ETS), HRI, the LDC, Microsoft Research, Mitsubishi Electric, the Parlance and SENSEI projects, the University of Pennsylvania, and Yahoo Labs. There is one special session, for the second Dialog State Tracking Challenge. SIGDIAL 2014 will be held in co-location and collaboration with INLG 2014, and there will be a joint session between the two conferences on the afternoon of June 19. This joint session features oral presentations and posters. The banquet will be held jointly with INLG just after the joint session.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2015 will be held just before INTERSPEECH 2015.  The General Chairs are Alexander Koller and Gabriel Skantze and the Program Chairs are Carolyn Penstein Rose and Masahiro Araki.&lt;br /&gt;
&lt;br /&gt;
SIGdial also endorses a number of other dialogue-related workshops and events that are open to the general community. The SIGdial Endorsed events for 2014 include: &lt;br /&gt;
* January 2014: IWSDS&lt;br /&gt;
* June 2014: Young Researcher&#039;s Roundtable in Spoken Dialogue Systems&lt;br /&gt;
* June 2014: The 2nd Dialog State Tracking Challenge&lt;br /&gt;
* June 2014: The REAL Challenge&lt;br /&gt;
* September 2014: SEMDIAL (DialWatt)&lt;br /&gt;
&lt;br /&gt;
In the fall of 2014 the SIGdial board will discuss, and SIGdial will then vote on, proposed constitutional amendments, including adding a treasurer to the executive board. In the spring of 2015 SIGdial will hold an election.&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2014Q3_Reports:_SIGDIAL&amp;diff=2408</id>
		<title>2014Q3 Reports: SIGDIAL</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2014Q3_Reports:_SIGDIAL&amp;diff=2408"/>
		<updated>2014-06-11T18:05:08Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: Created page with &amp;quot;&amp;#039;&amp;#039;&amp;#039;REPORT ON SIGdial ACTIVITIES: June 2013 to June 2014&amp;#039;&amp;#039;&amp;#039;  Amanda Stent, SIGdial President   SIGdial is the ACL and ISCA Special Interest Group on Discourse and Dialogue. Mor...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;REPORT ON SIGdial ACTIVITIES: June 2013 to June 2014&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Amanda Stent, SIGdial President &lt;br /&gt;
&lt;br /&gt;
SIGdial is the ACL and ISCA Special Interest Group on Discourse and Dialogue. More information about SIGdial can be found on our website: http://www.sigdial.org, including an updated calendar of upcoming events, resources, and previous reports. Users can also become members via the website; members are added to a low-volume, moderated mailing list (mainly conference and job announcements). SIGdial is fully compliant with ACL and ISCA guidelines for SIGs.&lt;br /&gt;
&lt;br /&gt;
The current Executive Board members for SIGdial are Amanda Stent (President), Jason Williams (Vice President), and Kristiina Jokinen (Secretary/Treasurer). The Scientific Advisory Board consists of three appointed and three elected members: Luciana Benotti, Raquel Fernandez Rovira, Kazunori Komatani, Matthew Purver, Verena Rieser, David Traum. Additional positions are President Emeritus: Tim Paek, IUI liaison: Candace Sidner, SIG SLUD/JSAI liaison: Yasuhiro Katagiri, ISCA Liaison: Michael Picheny, AVIOS Liaison: Alexander Rudnicky, SIGSEM Liaison: Harry Bunt, SIGGEN Liaison: Verena Rieser, IEEE SLT Liaison: Fabrice Lefevre, and Mailing List Maintainer: Laurent Romary. &lt;br /&gt;
&lt;br /&gt;
SIGdial has held an annual workshop on discourse and dialogue since 2000. Due to our steady membership growth, the SIGDIAL venue became a conference in 2009 and began to recognize Best Paper awards. SIGDIAL 2013 was held in Metz, France, in &amp;quot;near co-location&amp;quot; with INTERSPEECH 2013. The General Chairs were Maxine Eskenazi and Michael Strube, and the Program Chairs were Jason Williams and Barbara Di Eugenio. The local organizer was Olivier Pietquin. The Mentoring Chair was Kallirroi Georgila and the Sponsorship Chair was Amanda Stent. Sponsorships came from Amazon, Apple, AT&amp;amp;T, the Heidelberg Institute for Theoretical Studies (HITS), HRI, La Region Lorraine, Microsoft Research, Nuance, Samsung and Supelec. For SIGDIAL 2013, 98 papers and 17 demo proposals were submitted, and 57 papers and 14 demo proposals were accepted. There were 115 pre-registered attendees. For the second year in a row, we video-recorded oral presentations with the permission of the presenters; these were processed by Superlectures.com and are archived online. We also provided two student bursaries of $600 each.&lt;br /&gt;
SIGDIAL 2013 featured a new initiative, special sessions. The main SIGDIAL conference is single-track, takes place over two days, and features peer-reviewed papers and invited talks. Special sessions are held on the third day, may run in parallel, and are more flexible in structure; they are intended to accommodate, for example, shared tasks in dialog and discourse. At SIGDIAL 2013 there was one special session, for the first Dialog State Tracking Challenge.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2014 will be held in Philadelphia, PA, on June 18-20, just before ACL 2014. The General Chairs are Matthew Stone and Kallirroi Georgila. The Program Chairs are Helen Hastie and Ani Nenkova. The Local Chair is Keelan Evanini. The Mentoring Chair is Svetlana Stoyanchev and the Sponsorships Chair is Giuseppe Di Fabbrizio. Sponsors include Amazon, AT&amp;amp;T, the Educational Testing Service (ETS), HRI, the LDC, Microsoft Research, Mitsubishi Electric, the Parlance and SENSEI projects, the University of Pennsylvania, and Yahoo Labs. There is one special session, for the second Dialog State Tracking Challenge. SIGDIAL 2014 will be held in co-location and collaboration with INLG 2014, and there will be a joint session between the two conferences on the afternoon of June 19. This joint session features oral presentations and posters. The banquet will be held jointly with INLG just after the joint session.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2015 will be held just before INTERSPEECH 2015.  The General Chairs are Alexander Koller and Gabriel Skantze and the Program Chairs are Carolyn Penstein Rose and Masahiro Araki.&lt;br /&gt;
&lt;br /&gt;
SIGdial also endorses a number of other dialogue-related workshops and events that are open to the general community. The SIGdial Endorsed events for 2014 include: &lt;br /&gt;
-- January 2014:  IWSDS&lt;br /&gt;
-- June 2014: Young Researcher&#039;s Roundtable in Spoken Dialogue Systems &lt;br /&gt;
-- June 2014: The 2nd Dialog State Tracking Challenge &lt;br /&gt;
-- June 2014: The REAL Challenge&lt;br /&gt;
-- September 2014: SEMDIAL (DialWatt) &lt;br /&gt;
&lt;br /&gt;
In the fall of 2014 the SIGdial board will discuss, and SIGdial will then vote on, proposed constitutional amendments, including adding a treasurer to the executive board. In the spring of 2015 SIGdial will hold an election.&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2013Q3_Reports&amp;diff=1988</id>
		<title>2013Q3 Reports</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2013Q3_Reports&amp;diff=1988"/>
		<updated>2013-07-12T15:28:28Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ALL REPORTS ARE DUE ON JULY 10, 2013&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Reports from ACL Management&#039;&#039;&#039;&lt;br /&gt;
 &lt;br /&gt;
* [[2013Q3 Reports: Office Manager]] &lt;br /&gt;
* [[2013Q3 Reports: Secretary]]   &lt;br /&gt;
* [[2013Q3 Reports: Treasurer]]  &lt;br /&gt;
* [[2013Q3 Reports: NAACL]]  (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: EACL]]  (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: SIG Officer]]   (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: Conference Officer]]  &lt;br /&gt;
* [[2013Q3 Reports: Information Officer]] (SUBMITTED) &lt;br /&gt;
* [[2013Q3 Reports: Nominating Committee]]   (SUBMITTED)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;ACL 2013&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[2013Q3 Reports: General Chair]]  &lt;br /&gt;
* [[2013Q3 Reports: Program Chairs]]   (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: Local Arrangements Committee]]    (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: Student Research Workshop Chairs]]    (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: Publications Chairs]]   (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: Tutorial Chairs]]   (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: Workshop Chairs]]   (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: Demo Chair]]  &lt;br /&gt;
* [[2013Q3 Reports: Mentoring Chairs]]   (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: Sponsorship Committee]]   (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: Publicity Chairs]]   (SUBMITTED)&lt;br /&gt;
&lt;br /&gt;
* [[2013Q3 Reports: Exhibits]] &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Journals, Publications, and the Web&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[2013Q3 Reports: CL Journal Editor]]  &lt;br /&gt;
* [[2013Q3 Reports: TACL Journal Editor]] &lt;br /&gt;
* [[2013Q3 Reports: Squibs and Comments]]&lt;br /&gt;
* [[2013Q3 Reports: ACL Anthology]] (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: ACL Web Site]] (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: ACL Portal]]  (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: ACL Wiki]]  (SUBMITTED)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recent Conferences&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[2013Q3 Reports: COLING 2012]]   (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: ACL 2013]] &lt;br /&gt;
* [[2013Q3 Reports: NAACL 2013]]    (SUBMITTED)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Future Conferences&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[2013Q3 Reports: EMNLP 2013]] &lt;br /&gt;
* [[2013Q3 Reports: IJCNLP 2013]] &lt;br /&gt;
* [[2013Q3 Reports: COLING 2014]]  &lt;br /&gt;
* [[2013Q3 Reports: EACL 2014]]  (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: ACL 2014]]  (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: ACL-IJCNLP 2015]]  (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: ACL 2016]]  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Other Reports&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[2013Q3 Reports: AFNLP Representative]]   (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: Linguistics Olympiads 2012]]  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Special Interest Groups&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[2013Q3 Reports: SIGANN]]   (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: SIGBioMed]] &lt;br /&gt;
* [[2013Q3 Reports: SIGDAT]]    (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: SIGDIAL]]  (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: SIGFSM]]   (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: SIGGEN]]   (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: SIGHAN]]   (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: SIGHUM]]   (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: SIGLEX]]   (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: SIGMOL]]    &lt;br /&gt;
* [[2013Q3 Reports: SIGMORPHON]] &lt;br /&gt;
* [[2013Q3 Reports: SIGMT]]  (SUBMITTED)     &lt;br /&gt;
* [[2013Q3 Reports: SIGNLL]]   (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: SIGPARSE]]   (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: SIGSEM]]    (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: SIGSEMITIC]]    &lt;br /&gt;
* [[2013Q3 Reports: SIGSLPAT]] (SUBMITTED)&lt;br /&gt;
* [[2013Q3 Reports: SIGWAC]] (SUBMITTED)&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2013Q3_Reports:_SIGDIAL&amp;diff=1987</id>
		<title>2013Q3 Reports: SIGDIAL</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2013Q3_Reports:_SIGDIAL&amp;diff=1987"/>
		<updated>2013-07-12T15:27:09Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;REPORT ON SIGdial ACTIVITIES: June 2012 to June 2013&lt;br /&gt;
&lt;br /&gt;
Amanda Stent, SIGdial President&lt;br /&gt;
&lt;br /&gt;
SIGdial is the ACL and ISCA Special Interest Group on Discourse and Dialogue. More information about SIGdial can be found on our website, http://www.sigdial.org, including an updated calendar of upcoming events, resources, and previous reports. Users can also become members via the website; members are added to a low-volume, moderated mailing list (mainly conference and job announcements). SIGdial is fully compliant with ACL and ISCA guidelines for SIGs.&lt;br /&gt;
&lt;br /&gt;
In February/March of 2013, SIGdial, with the invaluable help of YRRSDS, held our biennial election. The current Executive Board members are Amanda Stent (President), Jason Williams (Vice President), and Kristiina Jokinen (Secretary/Treasurer). The Scientific Advisory Board consists of three appointed and three elected members: Luciana Benotti, Raquel Fernandez Rovira, Kazunori Komatani, Matthew Purver, Verena Rieser, David Traum.  Additional positions are President Emeritus: Tim Paek, IUI liaison:  Candace Sidner, SIG SLUD/JSAI liaison: Yasuhiro Katagiri, ISCA Liaison: Michael Picheny, AVIOS Liaison: Alexander Rudnicky, SIGSEM Liaison: Harry Bunt, SIGGEN Liaison: Verena Rieser, IEEE SLT Liaison: Fabrice Lefevre, and Mailing List Maintainer: Laurent Romary.&lt;br /&gt;
&lt;br /&gt;
SIGdial has held an annual workshop on discourse and dialogue since 2000. Due to steady membership growth, the SIGDIAL venue became a conference in 2009 and began to recognize Best Paper awards. SIGDIAL 2012 was held in Seoul, South Korea, in &amp;quot;near co-location&amp;quot; with ACL 2012. The General Chairs were Gary Geunbae Lee and Jonathan Ginzburg, and the Program Chairs were Amanda Stent and Claire Gardent. The local organizing committee comprised Minhwa Chung, Hyung Soon Kim, Jungyun Seo and Sunhee Kim. The Mentoring Chair was Kallirroi Georgila and the Sponsorship Chair was Jason Williams. Sponsorships came from AT&amp;amp;T, AVIOS, Honda Research Institute, Microsoft Research, IBM Research, Korea Telecom, NHN Corporation and Seoul National University. For SIGDIAL 2012, 63 papers were submitted, of which 40 were accepted as long papers, 19 as short papers and 4 as demo papers.&lt;br /&gt;
&lt;br /&gt;
For SIGDIAL 2012, a number of new initiatives were implemented. First, the SIGDIAL Board decided to video-record selected talks and archive them on the web. Toward this effort, we received a generous grant of 1,000 Euros from the ISCA Interspeech 2010 Board. Second, the Executive Board and the Conference Committee decided to fund student travel via the ISCA Student Travel Grant application process. For travel to Seoul, Korea, SIGDIAL funded 3 students at $600 each. ISCA also generously funded 3 students.&lt;br /&gt;
&lt;br /&gt;
The videos from SIGDIAL 2012 have been viewed a total of about 625 times. Some of the views are partial views, where someone watches only a portion of a talk. Complete talks have been watched about 150 times.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2013 will be held in Lyon, France on August 22-24, in co-location with INTERSPEECH 2013. The General Chairs are Maxine Eskenazi and Michael Strube. The Program Chairs are Barbara Di Eugenio and Jason Williams. The Local Chair is Olivier Pietquin. The Mentoring Chair is Kallirroi Georgila and the Sponsorships Chair is Amanda Stent. Sponsors include Apple, AT&amp;amp;T Labs - Research, HITS, Microsoft, Nuance, and Samsung. SIGDIAL 2013 has been extended by half a day compared with previous SIGDIAL meetings, to accommodate the co-location of the Dialog State Tracking Challenge. As with SIGDIAL 2012, we will video-record and archive oral presentations, and sponsor student travel grants.&lt;br /&gt;
&lt;br /&gt;
SIGdial also endorses a number of other dialogue-related workshops and events that are open to the general community. The SIGdial Endorsed events for 2013 are:&lt;br /&gt;
* August, 2013:  Young Researcher&#039;s Roundtable in Spoken Dialogue Systems&lt;br /&gt;
* August, 2013:  Dialog State Tracking Challenge&lt;br /&gt;
* December, 2013:  SEMDIAL (DialDam)&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2013Q3_Reports:_SIGDIAL&amp;diff=1986</id>
		<title>2013Q3 Reports: SIGDIAL</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2013Q3_Reports:_SIGDIAL&amp;diff=1986"/>
		<updated>2013-07-12T15:26:11Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;REPORT ON SIGdial ACTIVITIES: June 2012 to June 2013&lt;br /&gt;
&lt;br /&gt;
Amanda Stent, SIGdial President&lt;br /&gt;
&lt;br /&gt;
SIGdial is the ACL and ISCA Special Interest Group on Discourse and Dialogue. More information about SIGdial can be found on our website, http://www.sigdial.org, including an updated calendar of upcoming events, resources, and previous reports. Users can also become members via the website; members are added to a low-volume, moderated mailing list (mainly conference and job announcements). SIGdial is fully compliant with ACL and ISCA guidelines for SIGs.&lt;br /&gt;
&lt;br /&gt;
In February/March of 2013, SIGdial, with the invaluable help of YRRSDS, held our biennial election. The current Executive Board members are Amanda Stent (President), Jason Williams (Vice President), and Kristiina Jokinen (Secretary/Treasurer). The Scientific Advisory Board consists of three appointed and three elected members: Luciana Benotti, Raquel Fernandez Rovira, Kazunori Komatani, Matthew Purver, Verena Rieser, David Traum.  Additional positions are President Emeritus: Tim Paek, IUI liaison:  Candace Sidner, SIG SLUD/JSAI liaison: Yasuhiro Katagiri, ISCA Liaison: Michael Picheny, AVIOS Liaison: Alexander Rudnicky, SIGSEM Liaison: Harry Bunt, SIGGEN Liaison: Verena Rieser, IEEE SLT Liaison: Fabrice Lefevre, and Mailing List Maintainer: Laurent Romary.&lt;br /&gt;
&lt;br /&gt;
SIGdial has held an annual workshop on discourse and dialogue since 2000. Due to steady membership growth, the SIGDIAL venue became a conference in 2009 and began to recognize Best Paper awards. SIGDIAL 2012 was held in Seoul, South Korea, in &amp;quot;near co-location&amp;quot; with ACL 2012. The General Chairs were Gary Geunbae Lee and Jonathan Ginzburg, and the Program Chairs were Amanda Stent and Claire Gardent. The local organizing committee comprised Minhwa Chung, Hyung Soon Kim, Jungyun Seo and Sunhee Kim. The Mentoring Chair was Kallirroi Georgila and the Sponsorship Chair was Jason Williams. Sponsorships came from AT&amp;amp;T, AVIOS, Honda Research Institute, Microsoft Research, IBM Research, Korea Telecom, NHN Corporation and Seoul National University. For SIGDIAL 2012, 63 papers were submitted, of which 40 were accepted as long papers, 19 as short papers and 4 as demo papers.&lt;br /&gt;
&lt;br /&gt;
For SIGDIAL 2012, a number of new initiatives were implemented. First, the SIGDIAL Board decided to video-record selected talks and archive them on the web. Toward this effort, we received a generous grant of 1,000 Euros from the ISCA Interspeech 2010 Board. Second, the Executive Board and the Conference Committee decided to fund student travel via the ISCA Student Travel Grant application process. For travel to Seoul, Korea, SIGDIAL funded 3 students at $600 each. ISCA also generously funded 3 students.&lt;br /&gt;
&lt;br /&gt;
The videos from SIGDIAL 2012 have been viewed a total of about 625 times. Some of the views are partial views, where someone watches only a portion of a talk. Complete talks have been watched about 150 times.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2013 will be held in Lyon, France on August 22-24, in co-location with INTERSPEECH 2013. The General Chairs are Maxine Eskenazi and Michael Strube. The Program Chairs are Barbara Di Eugenio and Jason Williams. The Local Chair is Olivier Pietquin. The Mentoring Chair is Kallirroi Georgila and the Sponsorships Chair is Amanda Stent. Sponsors include Apple, AT&amp;amp;T Labs - Research, HITS, Microsoft, Nuance, and Samsung. SIGDIAL 2013 has been extended by half a day compared with previous SIGDIAL meetings, to accommodate the co-location of the Dialog State Tracking Challenge. As with SIGDIAL 2012, we will video-record and archive oral presentations, and sponsor student travel grants.&lt;br /&gt;
&lt;br /&gt;
SIGdial also endorses a number of other dialogue-related workshops and events that are open to the general community. The SIGdial Endorsed events for 2013 are:&lt;br /&gt;
   - August, 2013:  Young Researcher&#039;s Roundtable in Spoken Dialogue Systems&lt;br /&gt;
   - August, 2013:  Dialog State Tracking Challenge&lt;br /&gt;
   - December, 2013:  SEMDIAL (DialDam)&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2013Q3_Reports:_SIGDIAL&amp;diff=1985</id>
		<title>2013Q3 Reports: SIGDIAL</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2013Q3_Reports:_SIGDIAL&amp;diff=1985"/>
		<updated>2013-07-12T15:25:59Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;REPORT ON SIGdial ACTIVITIES: June 2012 to June 2013&lt;br /&gt;
&lt;br /&gt;
Amanda Stent, SIGdial President&lt;br /&gt;
&lt;br /&gt;
SIGdial is the ACL and ISCA Special Interest Group on Discourse and Dialogue. More information about SIGdial can be found on our website, http://www.sigdial.org, including an updated calendar of upcoming events, resources, and previous reports. Users can also become members via the website; members are added to a low-volume, moderated mailing list (mainly conference and job announcements). SIGdial is fully compliant with ACL and ISCA guidelines for SIGs.&lt;br /&gt;
&lt;br /&gt;
In February/March of 2013, SIGdial, with the invaluable help of YRRSDS, held our biennial election. The current Executive Board members are Amanda Stent (President), Jason Williams (Vice President), and Kristiina Jokinen (Secretary/Treasurer). The Scientific Advisory Board consists of three appointed and three elected members: Luciana Benotti, Raquel Fernandez Rovira, Kazunori Komatani, Matthew Purver, Verena Rieser, David Traum.  Additional positions are President Emeritus: Tim Paek, IUI liaison:  Candace Sidner, SIG SLUD/JSAI liaison: Yasuhiro Katagiri, ISCA Liaison: Michael Picheny, AVIOS Liaison: Alexander Rudnicky, SIGSEM Liaison: Harry Bunt, SIGGEN Liaison: Verena Rieser, IEEE SLT Liaison: Fabrice Lefevre, and Mailing List Maintainer: Laurent Romary.&lt;br /&gt;
&lt;br /&gt;
SIGdial has held an annual workshop on discourse and dialogue since 2000. Due to steady membership growth, the SIGDIAL venue became a conference in 2009 and began to recognize Best Paper awards. SIGDIAL 2012 was held in Seoul, South Korea, in &amp;quot;near co-location&amp;quot; with ACL 2012. The General Chairs were Gary Geunbae Lee and Jonathan Ginzburg, and the Program Chairs were Amanda Stent and Claire Gardent. The local organizing committee comprised Minhwa Chung, Hyung Soon Kim, Jungyun Seo and Sunhee Kim. The Mentoring Chair was Kallirroi Georgila and the Sponsorship Chair was Jason Williams. Sponsorships came from AT&amp;amp;T, AVIOS, Honda Research Institute, Microsoft Research, IBM Research, Korea Telecom, NHN Corporation and Seoul National University. For SIGDIAL 2012, 63 papers were submitted, of which 40 were accepted as long papers, 19 as short papers and 4 as demo papers.&lt;br /&gt;
&lt;br /&gt;
For SIGDIAL 2012, a number of new initiatives were implemented. First, the SIGDIAL Board decided to video-record selected talks and archive them on the web. Toward this effort, we received a generous grant of 1,000 Euros from the ISCA Interspeech 2010 Board. Second, the Executive Board and the Conference Committee decided to fund student travel via the ISCA Student Travel Grant application process. For travel to Seoul, Korea, SIGDIAL funded 3 students at $600 each. ISCA also generously funded 3 students.&lt;br /&gt;
&lt;br /&gt;
The videos from SIGDIAL 2012 have been viewed a total of about 625 times. Some of the views are partial views, where someone watches only a portion of a talk. Complete talks have been watched about 150 times.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2013 will be held in Lyon, France on August 22-24, in co-location with INTERSPEECH 2013. The General Chairs are Maxine Eskenazi and Michael Strube. The Program Chairs are Barbara Di Eugenio and Jason Williams. The Local Chair is Olivier Pietquin. The Mentoring Chair is Kallirroi Georgila and the Sponsorships Chair is Amanda Stent. Sponsors include Apple, AT&amp;amp;T Labs - Research, HITS, Microsoft, Nuance, and Samsung. SIGDIAL 2013 has been extended by half a day compared with previous SIGDIAL meetings, to accommodate the co-location of the Dialog State Tracking Challenge. As with SIGDIAL 2012, we will video-record and archive oral presentations, and sponsor student travel grants.&lt;br /&gt;
&lt;br /&gt;
SIGdial also endorses a number of other dialogue-related workshops and events that are open to the general community. The SIGdial Endorsed events for 2013 are:&lt;br /&gt;
  -- August, 2013:  Young Researcher&#039;s Roundtable in Spoken Dialogue Systems&lt;br /&gt;
  -- August, 2013:  Dialog State Tracking Challenge&lt;br /&gt;
  -- December, 2013:  SEMDIAL (DialDam)&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/adminwiki/index.php?title=2013Q3_Reports:_SIGDIAL&amp;diff=1984</id>
		<title>2013Q3 Reports: SIGDIAL</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/adminwiki/index.php?title=2013Q3_Reports:_SIGDIAL&amp;diff=1984"/>
		<updated>2013-07-12T15:25:38Z</updated>

		<summary type="html">&lt;p&gt;AmandaStent: New page: REPORT ON SIGdial ACTIVITIES: June 2012 to June 2013  Amanda Stent, SIGdial President  SIGdial is the ACL and ISCA Special Interest Group on Discourse and Dialogue. More information about ...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;REPORT ON SIGdial ACTIVITIES: June 2012 to June 2013&lt;br /&gt;
&lt;br /&gt;
Amanda Stent, SIGdial President&lt;br /&gt;
&lt;br /&gt;
SIGdial is the ACL and ISCA Special Interest Group on Discourse and Dialogue. More information about SIGdial can be found on our website, http://www.sigdial.org, including an updated calendar of upcoming events, resources, and previous reports. Users can also become members via the website; members are added to a low-volume, moderated mailing list (mainly conference and job announcements). SIGdial is fully compliant with ACL and ISCA guidelines for SIGs.&lt;br /&gt;
&lt;br /&gt;
In February/March of 2013, SIGdial, with the invaluable help of YRRSDS, held our biennial election. The current Executive Board members are Amanda Stent (President), Jason Williams (Vice President), and Kristiina Jokinen (Secretary/Treasurer). The Scientific Advisory Board consists of three appointed and three elected members: Luciana Benotti, Raquel Fernandez Rovira, Kazunori Komatani, Matthew Purver, Verena Rieser, David Traum.  Additional positions are President Emeritus: Tim Paek, IUI liaison:  Candace Sidner, SIG SLUD/JSAI liaison: Yasuhiro Katagiri, ISCA Liaison: Michael Picheny, AVIOS Liaison: Alexander Rudnicky, SIGSEM Liaison: Harry Bunt, SIGGEN Liaison: Verena Rieser, IEEE SLT Liaison: Fabrice Lefevre, and Mailing List Maintainer: Laurent Romary.&lt;br /&gt;
&lt;br /&gt;
SIGdial has held an annual workshop on discourse and dialogue since 2000. Due to steady membership growth, the SIGDIAL venue became a conference in 2009 and began to recognize Best Paper awards. SIGDIAL 2012 was held in Seoul, South Korea, in &amp;quot;near co-location&amp;quot; with ACL 2012. The General Chairs were Gary Geunbae Lee and Jonathan Ginzburg, and the Program Chairs were Amanda Stent and Claire Gardent. The local organizing committee comprised Minhwa Chung, Hyung Soon Kim, Jungyun Seo and Sunhee Kim. The Mentoring Chair was Kallirroi Georgila and the Sponsorship Chair was Jason Williams. Sponsorships came from AT&amp;amp;T, AVIOS, Honda Research Institute, Microsoft Research, IBM Research, Korea Telecom, NHN Corporation and Seoul National University. For SIGDIAL 2012, 63 papers were submitted, of which 40 were accepted as long papers, 19 as short papers and 4 as demo papers.&lt;br /&gt;
&lt;br /&gt;
For SIGDIAL 2012, a number of new initiatives were implemented. First, the SIGDIAL Board decided to video-record selected talks and archive them on the web. Toward this effort, we received a generous grant of 1,000 Euros from the ISCA Interspeech 2010 Board. Second, the Executive Board and the Conference Committee decided to fund student travel via the ISCA Student Travel Grant application process. For travel to Seoul, Korea, SIGDIAL funded 3 students at $600 each. ISCA also generously funded 3 students.&lt;br /&gt;
&lt;br /&gt;
The videos from SIGDIAL 2012 have been viewed a total of about 625 times. Some of the views are partial views, where someone watches only a portion of a talk. Complete talks have been watched about 150 times.&lt;br /&gt;
&lt;br /&gt;
SIGDIAL 2013 will be held in Lyon, France on August 22-24, in co-location with INTERSPEECH 2013. The General Chairs are Maxine Eskenazi and Michael Strube. The Program Chairs are Barbara Di Eugenio and Jason Williams. The Local Chair is Olivier Pietquin. The Mentoring Chair is Kallirroi Georgila and the Sponsorships Chair is Amanda Stent. Sponsors include Apple, AT&amp;amp;T Labs - Research, HITS, Microsoft, Nuance, and Samsung. SIGDIAL 2013 has been extended by half a day compared with previous SIGDIAL meetings, to accommodate the co-location of the Dialog State Tracking Challenge. As with SIGDIAL 2012, we will video-record and archive oral presentations, and sponsor student travel grants.&lt;br /&gt;
&lt;br /&gt;
SIGdial also endorses a number of other dialogue-related workshops and events that are open to the general community. The SIGdial Endorsed events for 2013 are:&lt;br /&gt;
August, 2013:  Young Researcher&#039;s Roundtable in Spoken Dialogue Systems&lt;br /&gt;
August, 2013:  Dialog State Tracking Challenge&lt;br /&gt;
December, 2013:  SEMDIAL (DialDam)&lt;/div&gt;</summary>
		<author><name>AmandaStent</name></author>
	</entry>
</feed>