HotFlip: White-Box Adversarial Examples for Text Classification

Javid Ebrahimi, Anyi Rao, Daniel Lowd, Dejing Dou


Abstract
We propose an efficient method for generating white-box adversarial examples that trick a character-level neural classifier. We find that only a few character manipulations are needed to greatly decrease the classifier's accuracy. Our method relies on an atomic flip operation, which swaps one token for another based on the gradients with respect to the one-hot input vectors. Because the method is efficient, it also enables adversarial training, which makes the model more robust to attacks at test time. With a few semantics-preserving constraints, we demonstrate that HotFlip can be adapted to attack a word-level classifier as well.
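The gradient-based flip operation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `best_flip`, the toy tensor shapes, and the scoring details are assumptions based on the abstract's description (a flip from token a to token b at position i is scored by the first-order loss change estimated from the gradient of the one-hot input).

```python
import numpy as np

def best_flip(one_hot, grad):
    """Return (position, new_token_id) for the single flip with the
    largest estimated loss increase.

    one_hot: (seq_len, vocab) one-hot encoding of the input.
    grad:    (seq_len, vocab) gradient of the loss w.r.t. one_hot.

    Flipping position i from token a to token b changes the input by
    (e_b - e_a), so the first-order loss change is grad[i, b] - grad[i, a].
    """
    seq_len = one_hot.shape[0]
    current = one_hot.argmax(axis=1)              # token id at each position
    # Estimated loss change for flipping each position i to each token j.
    delta = grad - grad[np.arange(seq_len), current][:, None]
    # Forbid "flipping" a position to the token it already holds.
    delta[np.arange(seq_len), current] = -np.inf
    i, j = np.unravel_index(np.argmax(delta), delta.shape)
    return int(i), int(j)

# Toy example: 3 positions, vocabulary of 4 tokens [0, 2, 1].
one_hot = np.eye(4)[[0, 2, 1]]
grad = np.array([[0., 1., 0., 0.],
                 [0., 0., 0., 5.],
                 [2., 0., 0., 0.]])
print(best_flip(one_hot, grad))  # (1, 3): flip position 1 to token 3
```

In the paper's setting this scoring is applied over a beam of candidate character flips; the sketch above only ranks a single flip, which is the atomic operation the abstract refers to.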
Anthology ID:
P18-2006
Volume:
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Iryna Gurevych, Yusuke Miyao
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
31–36
URL:
https://aclanthology.org/P18-2006
DOI:
10.18653/v1/P18-2006
Cite (ACL):
Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-Box Adversarial Examples for Text Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31–36, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
HotFlip: White-Box Adversarial Examples for Text Classification (Ebrahimi et al., ACL 2018)
PDF:
https://aclanthology.org/P18-2006.pdf
Poster:
 P18-2006.Poster.pdf
Code
 additional community code
Data
SST