How Effectively Can Machines Defend Against Machine-Generated Fake News? An Empirical Study

Meghana Moorthy Bhat, Srinivasan Parthasarathy


Abstract
We empirically study the effectiveness of machine-generated fake news detectors by measuring their sensitivity to different synthetic perturbations at test time. Current machine-generated fake news detectors rely on provenance to determine the veracity of news. Our experiments find that the success of these detectors can be limited, since they are rarely sensitive to semantic perturbations but are very sensitive to syntactic perturbations. We open-source our code, which we believe can serve as a useful diagnostic tool for evaluating models aimed at fighting machine-generated fake news.
Anthology ID:
2020.insights-1.7
Volume:
Proceedings of the First Workshop on Insights from Negative Results in NLP
Month:
November
Year:
2020
Address:
Online
Editors:
Anna Rogers, João Sedoc, Anna Rumshisky
Venue:
insights
Publisher:
Association for Computational Linguistics
Pages:
48–53
URL:
https://aclanthology.org/2020.insights-1.7
DOI:
10.18653/v1/2020.insights-1.7
Cite (ACL):
Meghana Moorthy Bhat and Srinivasan Parthasarathy. 2020. How Effectively Can Machines Defend Against Machine-Generated Fake News? An Empirical Study. In Proceedings of the First Workshop on Insights from Negative Results in NLP, pages 48–53, Online. Association for Computational Linguistics.
Cite (Informal):
How Effectively Can Machines Defend Against Machine-Generated Fake News? An Empirical Study (Bhat & Parthasarathy, insights 2020)
PDF:
https://aclanthology.org/2020.insights-1.7.pdf
Video:
https://slideslive.com/38940794
Data
RealNews