Short-Term Reform Proposals for ACL Reviewing


Note: We are soliciting feedback from the ACL community on the short- and long-term reviewing proposals. If you would like to provide feedback, please do so here:

https://forms.office.com/Pages/ResponsePage.aspx?id=9028kaqAQ0OMdrEjlJf7WQiNRJRoOx9OlzQS6C5hck5URVc2MDZPRFBVNDRRRjBaMjBQVk41RVpMOC4u

These short-term proposals were adopted by the ACL Exec on June 8 as an initial step to improve reviewing. As we work towards a more comprehensive reform, we welcome feedback on all aspects of our current reviewing system (including the short-term proposals). Feedback from our community will be valuable in informing further development of the system.

The rapid growth of submissions and the increasing popularity of preprints have created four problems for the current ACL reviewing system:

Review is not blind for all papers. Once a paper is posted on arXiv, its review is no longer blind: the best reviewer for a paper would be familiar with all public work relevant to the paper under review, including papers posted on arXiv. When some papers are blind and others are not, our review process is biased and unfair.

Turnaround time is too long. A large factor in the incentive to post to arXiv is that the time from when a paper is finished to when it can be made public can be many months, especially after several near-random rejections. Reducing this turnaround time is key to removing the incentive to post to arXiv before review. If turnaround is fast enough, we can feel better about banning arXiv posting before review, without worrying that it will drive people away to other venues.

Review quality is too low. Good, experienced reviewers have no strong external incentive to do a good job; we rely solely on their internal incentives. New reviewers lack good mechanisms for getting trained.

Not enough space for all good papers. Program chairs have to reject good papers in order to meet target acceptance rates, which some in our community insist on for various reasons. This is unfortunate in itself, and combined with low review quality and the randomness of just three (perhaps poorly assigned) reviewers, it leads to a large number of resubmissions, dramatically increasing review load.

To address these problems, the ACL Committee on Reviewing has been working on two sets of proposals for reforming the reviewing system of ACL-related conferences: short-term and long-term. This document presents the short-term proposals: four complementary actions that can realistically be implemented to improve the ACL review process in the near future (while the committee continues to investigate changes that require a longer lead time). These actions jointly address the four problems identified above. They are likely to be accepted by the majority of NLP researchers, and they keep existing ACL policies in place, including those regarding submission, review, and citation.

1 Establishing ACL Archives (a.k.a. Findings)

The pace of NLP research has increased in recent years, both in the volume of papers submitted to top conferences and in the perceived reduction in the “half-life” of papers. At the same time, acceptance rates for leading conferences (e.g., ACL, NAACL, EMNLP) have remained at around 20–25%, which means that the absolute volume of rejected papers has actually increased. A significant number of papers that fall below the acceptance threshold of an individual conference are therefore still likely to be of a publishable standard. This proposal attempts to address this issue and to find a middle ground between maintaining the selectivity of major conferences (a necessary evil for academic prestige) and giving the authors of such “publishable” papers the option to have their peer-reviewed work published. The papers would be published outside the main conference proceedings, in a new journal-like outlet that we call ACL Archives here. (It was given the name Findings of EMNLP at EMNLP 2020.)

  • The review process would be slightly modified to separate the paper ranking used to select papers for the conference from the classification of whether a paper has sufficient substance/quality/novelty to warrant publication in the first place. This could be done as follows: during the review process, in addition to existing criteria (e.g., novelty, rigour, and experimental soundness), reviewers, ACs, and ultimately PC Chairs would assess papers for publishability. PC Chairs could implement this new criterion as an additional Agree/Disagree field in the review form, possibly responding to the following question (on a 4-point Likert scale): Is the paper of a suitable standard to be publishable in its current form, possibly with minor revisions based on reviewer feedback? (A sketch of how the two decisions could be combined follows this list.)
  • PC Chairs would offer the authors of papers deemed publishable in principle the option to have their papers published with the stamp of peer review, but through an alternative mechanism to the main conference proceedings. To avoid any ambiguity with papers published in the main conference proceedings, a new journal-like publication outlet should be created (e.g., ACL Archives or Findings). It would be indexed in the ACL Anthology, with a single issue per conference. Once the decision is made to publish in this form, the research would be considered formally published, precluding the possibility of also submitting the paper to the next conference. Authors could choose to opt in or out. Papers for which this is likely to be an attractive option include those for which publication is particularly time-critical (e.g., where the novelty of the work would be diminished if it were submitted to the next conference).
  • PC Chairs would not offer these papers a presentation slot at the conference. They could decide to organize an LREC-like mass poster session, as long as the regular papers of the main conference are clearly demarcated and given greater prominence, to avoid further reinforcing the impression that regular papers with poster presentations are somehow second-rate.
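
To make the separation of the two decisions concrete, here is a minimal sketch in Python, assuming a mean overall score drives the conference ranking and the median of the 4-point publishability ratings drives the publishability classification. The paper IDs, scores, slot count, and threshold are purely illustrative, not part of the proposal:

  from statistics import mean, median

  # Papers with (overall reviewer scores, 4-point publishability ratings).
  # All IDs and numbers are invented for illustration.
  papers = {
      "P1": ([4.5, 4.0, 4.0], [4, 4, 3]),
      "P2": ([3.5, 3.0, 3.5], [4, 3, 4]),
      "P3": ([2.0, 2.5, 2.0], [2, 2, 1]),
  }

  CONFERENCE_SLOTS = 1        # assumed capacity of the main conference
  PUBLISHABLE_THRESHOLD = 3   # assumed median rating for "publishable"

  # Decision 1: rank papers by mean overall score to fill conference slots.
  ranked = sorted(papers, key=lambda p: mean(papers[p][0]), reverse=True)
  accepted = set(ranked[:CONFERENCE_SLOTS])

  # Decision 2: independently classify each paper as publishable or not.
  publishable = {p for p, (_, likert) in papers.items()
                 if median(likert) >= PUBLISHABLE_THRESHOLD}

  # Publishable papers that missed the conference cut get an opt-in
  # offer to appear in the Archives/Findings outlet instead.
  findings_offers = publishable - accepted

  print("Main conference:", sorted(accepted))        # ['P1']
  print("Findings offers:", sorted(findings_offers)) # ['P2']

The point of the sketch is that the publishability classification is computed independently of the ranking, so a paper rejected from the main conference for capacity reasons can still be offered publication.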

2 Establishing a Best Reviewer Award

This action aims to encourage reviewers to write high-quality reviews. Authors, other reviewers, and ACs would be asked to rate the quality, tone, helpfulness, and accuracy of reviews. PC Chairs would be encouraged to establish a Best Reviewer Award Committee and give awards to the top k reviewers (a sketch of the selection follows below). The award would be a strong signal of excellent service that recipients could include in their CVs. Optionally, ACL could also decide to give the best reviewers a discount on conference registration.
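
As a rough illustration, the top-k selection could be as simple as the following Python sketch, which averages the four rating dimensions per review and then per reviewer. The reviewer names, ratings, and value of k are invented for the example; the actual aggregation would be up to the Award Committee:

  from collections import defaultdict
  from statistics import mean

  # One entry per rated review: (reviewer, ratings on an assumed 1-5 scale).
  review_ratings = [
      ("rev_a", {"quality": 5, "tone": 5, "helpfulness": 4, "accuracy": 5}),
      ("rev_a", {"quality": 4, "tone": 5, "helpfulness": 5, "accuracy": 4}),
      ("rev_b", {"quality": 3, "tone": 4, "helpfulness": 3, "accuracy": 3}),
      ("rev_c", {"quality": 5, "tone": 4, "helpfulness": 5, "accuracy": 5}),
  ]

  K = 2  # assumed number of awards

  # Score each review as the mean of its four rated dimensions.
  per_reviewer = defaultdict(list)
  for reviewer, ratings in review_ratings:
      per_reviewer[reviewer].append(mean(ratings.values()))

  # Rank reviewers by their mean review score and take the top k.
  winners = sorted(per_reviewer, key=lambda r: mean(per_reviewer[r]),
                   reverse=True)[:K]
  print("Best Reviewer Awards:", winners)  # ['rev_c', 'rev_a']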

3 Training Reviewers

To improve the quality of reviews produced by less experienced members of the PC, PC Chairs could be encouraged to mentor reviewers (e.g., following the model established by ACL 2020, in which ACs mentor one or two first-time reviewers). PC Chairs could also ask senior reviewers and PhD advisors to contribute to a collection of advice (e.g., via blog posts or podcasts), and/or collect the advice blog posts already produced for previous conferences in one central location. The recipients of the Best Reviewer Awards could likewise be invited by PC Chairs to write blog posts sharing their experience, or to give tutorials at ACL conferences.

4 Opt-in Revise and Resubmit

One way to reduce the current review load is to share the reviews of a rejected and resubmitted paper between two conferences when the authors agree to such review sharing. PC Chairs of one conference (e.g., ACL) could be encouraged to add a field to the submission form that lets authors indicate that their paper was previously submitted to another conference (e.g., NAACL), provide the submission number, and consent to reviews being shared. After the submission deadline, the PC Chairs would request a one-time report (e.g., a spreadsheet) from the PC Chairs of each previous conference containing the reviews of these papers; a sketch of this exchange follows below. PC Chairs could then invite the reviewers from the previous conference to re-review the revised paper, or they could share the previous reviews (anonymized) with the new reviewers. Alternatively, if the reviews themselves cannot be shared, reviewers’ names could be shared so that the paper could be sent to (a subset of) the reviewers who first reviewed it.
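
The mechanics of the exchange could be quite lightweight. The following Python sketch assumes the one-time report is a CSV file with columns old_submission_id, anonymized_review, and reviewer_name; the column names, field names, and functions are illustrative assumptions, since the proposal only specifies "a spreadsheet" of reviews for opted-in resubmissions:

  import csv

  def load_shared_reviews(report_path):
      """Read the previous conference's one-time report into a dict
      keyed by the old submission number."""
      shared = {}
      with open(report_path, newline="", encoding="utf-8") as f:
          for row in csv.DictReader(f):
              # Assumed columns: old_submission_id, anonymized_review,
              # reviewer_name (fallback if reviews cannot be shared).
              shared.setdefault(row["old_submission_id"], []).append(row)
      return shared

  def attach_prior_reviews(new_submissions, shared):
      """Attach prior reviews to resubmissions whose authors opted in
      and supplied their old submission number on the submission form."""
      for sub in new_submissions:
          if sub.get("opted_in") and sub.get("old_submission_id") in shared:
              sub["prior_reviews"] = shared[sub["old_submission_id"]]
      return new_submissions

Because the report is requested once per conference cycle rather than through a live integration between submission systems, it keeps the coordination cost between PC Chairs low.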

Such an opt-in revise-and-resubmit system has been discussed before but has never been implemented by ACL. Making it optional would allow ACL to gauge community interest in the approach before possibly implementing it at a larger scale.