Not All Reviews Are Equal: Towards Addressing Reviewer Biases for Opinion Summarization

Wenyi Tay


Abstract
Consumers read online reviews for insights that help them make decisions. Given the large volume of reviews, succinct review summaries are important for many applications. Existing research has focused on mining opinions from review texts alone and largely ignores the reviewers. However, reviewers have biases and may write lenient or harsh reviews; they may also prefer some topics over others. Therefore, not all reviews are equal. Ignoring these biases can produce misleading summaries. We aim for review summarization that includes balanced opinions from reviewers with different biases and preferences. We propose to model reviewer biases from their review texts and rating distributions, and to learn a bias-aware opinion representation. We further devise an approach for balanced opinion summarization of reviews using this bias-aware opinion representation.
Anthology ID:
P19-2005
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Fernando Alva-Manchego, Eunsol Choi, Daniel Khashabi
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
34–42
URL:
https://aclanthology.org/P19-2005
DOI:
10.18653/v1/P19-2005
Cite (ACL):
Wenyi Tay. 2019. Not All Reviews Are Equal: Towards Addressing Reviewer Biases for Opinion Summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 34–42, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Not All Reviews Are Equal: Towards Addressing Reviewer Biases for Opinion Summarization (Tay, ACL 2019)
PDF:
https://aclanthology.org/P19-2005.pdf