The Fourth Workshop on Online Abuse and Harms

Event Notification Type: 
Call for Papers
Abbreviated Title: 
4th WOAH
Location: 
Virtually co-located with EMNLP
Friday, 20 November 2020
Contact: 
Workshop organizers
Zeerak Waseem
Submission Deadline: 
Tuesday, 1 September 2020

*** Final Call for Papers ***

Dear colleagues,

The workshop submission deadlines are approaching fast! Research papers are due in just a week (1 September 2020), and shared exploration papers are due a week after that (7 September 2020).

We look forward to seeing your papers!

*** CfP ***

4th WOAH: The 4th Workshop on Online Abuse and Harms (previously the Workshop on Abusive Language Online)

Virtually co-located with EMNLP 2020, 20/11/2020
Submission deadline: September 1, 2020
Website: https://www.workshopononlineabuse.com/home
Submission link: https://www.softconf.com/emnlp2020/WOAH4/

Overview

Digital technologies have brought myriad benefits for society, transforming how people connect, communicate and interact with each other. However, they have also enabled harmful and abusive behaviours, from interpersonal aggression to bullying and hate speech, to reach large audiences and have their negative effects amplified. These effects are further compounded because marginalised and vulnerable communities are disproportionately at risk of receiving abuse. As policymakers, civil society and tech companies devote more resources and time to tackling online abuse, there is a pressing need for scientific research that rigorously investigates how harms are defined, detected, moderated and countered.

Technical disciplines such as machine learning (ML), natural language processing (NLP) and statistics have made substantial advances in detecting and modelling online abuse, primarily by leveraging state-of-the-art ML and NLP techniques such as contextual word embeddings, transfer learning and graph embeddings. However, concerns have been raised about the potential societal biases that many of these ML-based detection systems reflect, propagate and sometimes amplify. These concerns are magnified by the lack of explainability and transparency of these models. For example, many detection systems have different error rates for content produced by different people, or perform better at detecting certain types of abuse. Such issues are not purely engineering challenges but raise fundamental questions of fairness and social harm: any intervention that employs biased models to detect and moderate online abuse could end up exacerbating the social injustices it aims to counter. For instance, women are 27 times more likely to be the target of online harassment, and black people report more incidents of racially motivated online harassment; if tools further exacerbate harms through poor classification performance, such marginalised communities can face additional barriers to digital spaces. Developing reliable and robust tools in collaboration with key stakeholders, such as policy-makers and in particular civil society, is crucial as the field matures and automated detection systems become ubiquitous online.

For the fourth edition of the Workshop on Online Abuse and Harms (4th WOAH), we address these issues through our theme: Social Bias and Unfairness in Online Abuse Detection. We continue to emphasize the need for inter-, cross- and anti-disciplinary work on online abuse and harms, and invite paper submissions from a range of fields, including but not limited to NLP, machine learning, computational social science, law, politics, psychology, network analysis, sociology and cultural studies. Additionally, in this iteration we invite civil society, in particular individuals and organisations working with women and marginalised communities who are often disproportionately affected by online abuse, to submit reports, case studies, findings, data, and accounts of their lived experiences. We hope that through these engagements we can develop computational tools which address the issues faced by those on the front lines of tackling online abuse.

Types of Contributions

Academic/Research Papers

We invite long (8 pages) and short (4 pages) academic/research papers on any of the following general topics.

Related to developing computational models and systems:

NLP models and methods for detecting abusive language online, including, but not limited to, hate speech, gender-based violence and cyberbullying
Application of NLP tools to analyze social media content and other large data sets
NLP models for cross-lingual abusive language detection
Computational models for multi-modal abuse detection
Development of corpora and annotation guidelines
Critical algorithm studies with a focus on content moderation technology
Human-Computer Interaction for abusive language detection systems
Best practices for using NLP techniques in watchdog settings
Submissions addressing interpretability and social biases in content moderation technologies

Related to legal, social, and policy considerations of abusive language online:

The social and personal consequences of being the target of abusive language and targeting others with abusive language
Assessment of current (computational and non-computational) methods of addressing abusive language
Legal ramifications of measures taken against abusive language use
Social implications of monitoring and moderating unacceptable content
Considerations of implemented and proposed policies for dealing with abusive language online, and of the technological means used to do so

Contributions from Civil Society

In addition to academic submissions, we also invite civil society organisations to submit reports, case studies and findings on any of the following general topics:

Case studies and examples of harassment and abuse experienced online,
Successes and failures of content moderation systems and policies,
Outline of national/global legal and/or technical challenges faced,
Best practices of working in partnership with other actors,
Interventions that have helped victims and survivors of online abuse gather evidence,
Policy, practice and content moderation systems recommendations for tech platforms and researchers, and
Documentation of policy gaps that require data and academic support

Please see the WOAH Call for Contributions from Civil Society webpage for more details: https://www.workshopononlineabuse.com/cfp/civil-society

Shared Exploration

A special Shared Exploration is being launched this year for the 4th Workshop on Online Abuse and Harms (WOAH). Using the dataset provided by Wulczyn et al. (2017), we encourage innovative analyses which align with this year's workshop theme: Social Bias and Unfairness in Online Abuse Detection.

Compared with traditional shared tasks, we have taken an unorthodox approach: we will review performance on the dataset against three criteria rather than a single evaluation metric. This allows us to take a more holistic view and to reward innovative and rigorous analyses, rather than basing our assessment on one metric, which can encourage submissions that are sophisticated in their engineering but pay less attention to the work's wider impact and significance.

Please see the WOAH shared exploration webpage (https://www.workshopononlineabuse.com/shared-exploration) for more detail.

Submission Information

We will be using the EMNLP 2020 Submission Guidelines. Authors are invited to submit full papers of up to 8 pages of content, with up to 2 additional pages for references. We also invite short papers of up to 4 pages of content, with up to 2 additional pages for references, and abstract submissions of up to 2 pages, with up to 1 additional page for references. Accepted papers will be given an additional page of content to address reviewer comments. We also welcome papers that describe systems.

Previously published papers cannot be accepted. Submissions will be reviewed by the program committee. As reviewing will be blind, please ensure that papers are anonymous: avoid self-references that reveal the author's identity, e.g., "We previously showed (Smith, 1991) ...", and instead use citations such as "Smith previously showed (Smith, 1991) ...".

We have also included a conflict-of-interest section in the submission form. You should mark all potential reviewers who have been authors on the paper, who are from the same research group or institution, or who have seen versions of the paper or discussed it with you. Finally, we ask that you follow the writing policies described on the website: https://www.workshopononlineabuse.com/policies

Workshop Program

Our plan for this one-day workshop includes:

Two keynote presentations from leading experts on the topic of social bias and abuse detection,
A multidisciplinary panel discussion,
A forum for plenary discussion of the issues that researchers and practitioners face in working on abusive language detection, and
Presentations of original academic work as well as contributions from civil society

EMNLP 2020 has moved to an entirely virtual format, and WOAH 2020 will now be held remotely via videoconference. As a result, we have widened the program to include a greater range of talks and activities. We will provide updates as the program is confirmed.

Important Dates

Submission due: September 1, 2020
Shared exploration submission due: September 7, 2020
Author Notification: September 29, 2020
Camera Ready: October 14, 2020
Workshop Date: November 20, 2020

Organizing Committee

Seyi Akiwowo, Glitch UK
Vinodkumar Prabhakaran, Google Research
Bertie Vidgen, The Alan Turing Institute
Zeerak Waseem, University of Sheffield