SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions

Mao Ye, Chengyue Gong, Qiang Liu


Abstract
State-of-the-art NLP models can often be fooled by human-unaware transformations such as synonymous word substitution. For security reasons, it is of critical importance to develop models with certified robustness that can provably guarantee that the prediction cannot be altered by any possible synonymous word substitution. In this work, we propose a certified robust method based on a new randomized smoothing technique, which constructs a stochastic ensemble by applying random word substitutions on the input sentences, and leverages the statistical properties of the ensemble to provably certify the robustness. Our method is simple and structure-free in that it only requires black-box queries of the model outputs, and hence can be applied to any pre-trained model (such as BERT) and any type of model (word-level or subword-level). Our method significantly outperforms recent state-of-the-art methods for certified robustness on both IMDB and Amazon text classification tasks. To the best of our knowledge, this is the first work to achieve certified robustness on large systems such as BERT with practically meaningful certified accuracy.
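As a rough illustration of the randomized-smoothing idea described in the abstract (not the authors' exact certification procedure, which additionally derives a provable bound), the Python sketch below builds a stochastic ensemble by randomly substituting synonyms in the input and takes a majority vote over black-box predictions. The synonym table, the toy base classifier, and the sampling parameters are hypothetical placeholders.

```python
import random
from collections import Counter

# Hypothetical synonym table; in practice this would come from a fixed
# synonym set (e.g., embedding-neighbor or WordNet-based candidates).
SYNONYMS = {
    "good": ["good", "great", "fine"],
    "movie": ["movie", "film"],
    "terrible": ["terrible", "awful", "horrible"],
}


def perturb(tokens, synonyms=SYNONYMS):
    """Randomly replace each word with one of its synonyms (or keep it)."""
    return [random.choice(synonyms.get(w, [w])) for w in tokens]


def smoothed_predict(classify, tokens, num_samples=100):
    """Majority vote of a black-box classifier over randomly perturbed inputs.

    `classify` is any function mapping a token list to a label, so this
    wrapper can sit on top of a pre-trained model such as BERT unchanged.
    """
    votes = Counter(classify(perturb(tokens)) for _ in range(num_samples))
    return votes.most_common(1)[0][0]


# Toy stand-in for a real sentiment model.
def toy_classifier(tokens):
    negative_words = {"terrible", "awful", "horrible"}
    return "negative" if any(t in negative_words for t in tokens) else "positive"


print(smoothed_predict(toy_classifier, ["terrible", "movie"]))
```

Because the smoothed prediction depends only on vote counts over model outputs, it is structure-free in the sense of the abstract: no access to gradients or internal representations is needed.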
Anthology ID:
2020.acl-main.317
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
3465–3475
URL:
https://aclanthology.org/2020.acl-main.317
DOI:
10.18653/v1/2020.acl-main.317
Cite (ACL):
Mao Ye, Chengyue Gong, and Qiang Liu. 2020. SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3465–3475, Online. Association for Computational Linguistics.
Cite (Informal):
SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions (Ye et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.317.pdf
Video:
http://slideslive.com/38928757
Code
lushleaf/Structure-free-certified-NLP
Data
IMDb Movie Reviews