In this shared task, PragTag, held at the ArgMining Workshop at EMNLP 2023, we invite the community to explore cross-domain, low-resource processing of peer reviews, using the recently proposed pragmatic tagging task as the objective and the recently introduced open multi-domain corpus of peer reviews as a rich auxiliary data source.
Peer review is a key element of the scientific process, yet it is challenging and could greatly benefit from assistance. At its core lie review reports -- short argumentative texts in which reviewers evaluate papers and suggest revisions. Automatic analysis of argumentation in peer reviews has numerous applications, from facilitating meta-scientific analysis of reviewing practices, to aggregating information from multiple reviews, to assisting less experienced reviewers.
Yet several challenges remain open. Peer reviews are scientific text, and models pre-trained on general-domain data may suffer from domain shift; the performance of the latest generation of LLMs in this domain remains unknown. Reviewing practices and criteria vary across research communities and publication venues, posing an additional challenge to generalization. Finally, reviewing data is scarce and expensive to label.
This competition aims to explore solutions to these challenges; details and participation at https://codalab.lisn.upsaclay.fr/competitions/13334.