ACM Hypertext-2018 Workshop on Opinion Mining, Summarization and Diversification

Event Notification Type: Call for Papers
Abbreviated Title: RevOpiD-2018
Location: Baltimore, Maryland, USA
Dates: Monday, 9 July 2018 to Thursday, 12 July 2018
Contact: Anil Kumar Singh
Contact Email: aksingh.cse [at] iitbhu.ac.in
Submission Deadline: Tuesday, 10 April 2018

ACM Hypertext-2018 Workshop on Opinion Mining, Summarization and Diversification

Call for Papers and Participation in the Shared Task
Website: https://sites.google.com/view/revopid-2018
Contact email: aksingh.cse [at] iitbhu.ac.in

* Submission Deadline: April 10, 2018 *

This workshop aims to uncover diverse perspectives on defining opinions. How can opinions be better summarized on online forums, in web search results, or elsewhere? What relationships can be mapped between exchanges of opinions on the web? We invite submissions on all such relatively unexplored dynamics of opinion mining and modeling.

Through this workshop on Opinion Mining, Summarization and Diversification, we aim to cover the following themes, on which we invite submissions in the form of original work and progress reports:

* Review Opinion Diversification
* Opinion Modeling techniques
* Text and Sentiment Summarization
* Opinion summarization in ranking
* Exchange of opinions as network graphs
* Joint Topic Sentiment Modeling
* Phrase Embeddings
* Sentiment Normalization on a relative scale
* Paraphrase detection in opinionated text
* Factors affecting likeability of online reviews
* Fake review detection
* Sarcasm detection in online reviews
* Bias propagation on online forums
* Evaluation of opinion diversity
* Evaluation of representativeness and diversity in ranking
* Knowledge Representation methods for opinions

Shared Task

As part of the workshop, we will also host a shared task on Review Opinion Diversification. The shared task aims to identify opinions in online product reviews. By identifying opinions, we do not mean simple string matching against a predefined list: two systems are rewarded equally whether they recognize 'this product is cost-effective', 'this product is inexpensive', or 'this product is worth the money' as the same opinion. We have an annotated dataset of 80+ products with more than 10,000 reviews in total, each review labeled with its constituent opinions in the form of one opinion matrix per product.
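The task materials do not prescribe a file format for the opinion matrix, but a minimal sketch of a binary review-by-opinion matrix (in Python with NumPy; all names and values below are illustrative, not taken from the actual dataset) might look like this:

```python
import numpy as np

# Illustrative sketch only: the real dataset's format may differ.
# Rows are reviews, columns are opinion clusters; matrix[i, j] = 1
# means review i expresses opinion j. Paraphrases such as
# "cost-effective", "inexpensive" and "worth the money" all map to
# the same column.
reviews = ["this product is cost-effective",     # hypothetical reviews
           "this product is inexpensive",
           "arrived late, but works fine"]
opinions = ["good value", "works as expected", "slow shipping"]

matrix = np.array([
    [1, 0, 0],  # review 0 expresses "good value"
    [1, 0, 0],  # review 1 paraphrases the same opinion
    [0, 1, 1],  # review 2 expresses two distinct opinions
])
```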

Subtask A (Usefulness Ranking)

A supervised task to predict the helpfulness rating of product reviews based on review text. For example, a review that 3 users rated as helpful and 2 users rated as not helpful has a helpfulness rating of 3/5.
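In other words, the target is the fraction of voters who found the review helpful; a minimal sketch (the field names are our assumption, not the official data schema):

```python
def helpfulness_rating(helpful_votes: int, total_votes: int) -> float:
    """Fraction of voters who rated the review as helpful."""
    return helpful_votes / total_votes if total_votes else 0.0

# The example above: 3 of 5 voters found the review helpful.
assert helpfulness_rating(3, 5) == 0.6  # i.e., 3/5
```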

Subtask B (Representativeness Ranking)

Subtask B judges a system on its ability to tell whether a given review R1 contains a given opinion O1. While R1 can easily be identified by its reviewer ID, opinions are not labeled with words; instead, they are identified by the other reviews in which they appear. We therefore ask participants to provide an opinion matrix as output, which we will evaluate using several verified metrics.
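The official metrics are described on the workshop site rather than here; purely as an illustration of how a predicted binary opinion matrix can be scored against a gold one, a cell-wise F1 (our assumption, not necessarily one of the official metrics) could be computed as follows:

```python
import numpy as np

def matrix_f1(gold: np.ndarray, pred: np.ndarray) -> float:
    """Cell-wise F1 between binary gold and predicted opinion matrices."""
    tp = np.logical_and(gold == 1, pred == 1).sum()  # true positives
    fp = np.logical_and(gold == 0, pred == 1).sum()  # false positives
    fn = np.logical_and(gold == 1, pred == 0).sum()  # false negatives
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```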

Subtask C (Exhaustive Coverage Ranking)

This subtask aims at producing, for each product, the top-k reviews from the review set such that the selected top-k reviews act as a summary of all the opinions expressed in that set.
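A common baseline for this kind of coverage objective, though not one prescribed by the task, is greedy set cover over the opinions each review expresses; a minimal sketch with hypothetical review and opinion IDs:

```python
def greedy_top_k(review_opinions: dict[str, set[str]], k: int) -> list[str]:
    """Greedily pick k reviews that together cover the most opinions."""
    selected, covered = [], set()
    candidates = dict(review_opinions)
    for _ in range(min(k, len(candidates))):
        # Pick the review that adds the most not-yet-covered opinions.
        best = max(candidates, key=lambda r: len(candidates[r] - covered))
        selected.append(best)
        covered |= candidates.pop(best)
    return selected

# Toy usage: R1 and R3 together cover all three opinions.
ranking = greedy_top_k(
    {"R1": {"o1", "o2"}, "R2": {"o2"}, "R3": {"o3"}}, k=2)
# -> ["R1", "R3"]
```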

Data and Resources

The training, development, and test data have been extracted and annotated from the Amazon SNAP Review Dataset and will be made available after registration.

Invitation

We invite participation from all researchers and practitioners. As is usual in shared tasks, the organizers rely on the honesty of participants: anyone with prior knowledge of part of the data that will eventually be used for evaluation is trusted not to use that knowledge unfairly. The only exception is the organizing team, whose members cannot submit a system. The organizing chair will serve as the authority for resolving any disputes concerning ethical issues or the completeness of system descriptions.

Timeline

Research Papers

Paper Submission Deadline: April 10, 2018
Notification of Acceptance: May 8, 2018
Camera-Ready Deadline: May 19, 2018
Conference Dates: July 9-12, 2018

Shared Task

Registration open: January 26, 2018
Release of Training Data: January 28, 2018
Dryrun: Release of Development Set: February 5, 2018
Dryrun: Submission on Development Set: February 20, 2018
Dryrun: Release of Scores: February 24, 2018
Registration Ends: March 8, 2018
Release of Test Set: March 10, 2018
Submission of Systems: March 17, 2018
System Results: March 25, 2018
System Description Paper Due: April 10, 2018
Notification of Acceptance: May 8, 2018
Camera-Ready Deadline: May 19, 2018
Conference Dates: July 9-12, 2018

See https://sites.google.com/view/revopid-2018 for more information.