Certified Robustness to Programmable Transformations in LSTMs

Yuhao Zhang, Aws Albarghouthi, Loris D’Antoni


Abstract
Deep neural networks for natural language processing are fragile in the face of adversarial examples—small input perturbations, like synonym substitution or word duplication, that cause a neural network to change its prediction. We present an approach to certifying the robustness of LSTMs (and extensions of LSTMs) and training models that can be efficiently certified. Our approach can certify robustness to intractably large perturbation spaces defined programmatically in a language of string transformations. Our evaluation shows that (1) our approach can train models that are more robust to combinations of string transformations than those produced using existing techniques, and (2) the resulting models achieve high certification accuracy.
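To make the notion of a programmatically defined perturbation space concrete, the following is a minimal, hypothetical Python sketch (it is not the paper's actual specification language or the code in the accompanying repository). It composes two toy string transformations, synonym substitution and word duplication, and enumerates the sentences reachable within a bounded number of edits; the toy lexicon, function names, and edit bound are all illustrative assumptions.

```python
# Hypothetical sketch: a perturbation space built from two programmable
# string transformations. Names and the composition rule are illustrative,
# not the paper's actual DSL.

SYNONYMS = {"film": ["movie"], "great": ["excellent", "fine"]}  # toy lexicon


def synonym_substitutions(tokens):
    """Yield sentences with exactly one word replaced by a synonym."""
    for i, tok in enumerate(tokens):
        for syn in SYNONYMS.get(tok, []):
            yield tokens[:i] + [syn] + tokens[i + 1:]


def word_duplications(tokens):
    """Yield sentences with exactly one word duplicated in place."""
    for i, tok in enumerate(tokens):
        yield tokens[:i + 1] + [tok] + tokens[i + 1:]


def perturbation_space(tokens, transformations, max_edits=2):
    """Enumerate all sentences reachable by applying up to `max_edits`
    transformations. The space grows combinatorially with sentence length
    and edit budget, which is why certification cannot rely on enumeration."""
    frontier = {tuple(tokens)}
    seen = set(frontier)
    for _ in range(max_edits):
        next_frontier = set()
        for sent in frontier:
            for transform in transformations:
                for perturbed in transform(list(sent)):
                    tup = tuple(perturbed)
                    if tup not in seen:
                        seen.add(tup)
                        next_frontier.add(tup)
        frontier = next_frontier
    return seen


space = perturbation_space("this film is great".split(),
                           [synonym_substitutions, word_duplications])
print(len(space))  # even this toy space grows quickly with the edit budget
```

Even under these toy assumptions the enumeration blows up quickly, which illustrates why the paper certifies robustness over such spaces symbolically rather than by checking each perturbed sentence.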
Anthology ID:
2021.emnlp-main.82
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1068–1083
URL:
https://aclanthology.org/2021.emnlp-main.82
DOI:
10.18653/v1/2021.emnlp-main.82
Cite (ACL):
Yuhao Zhang, Aws Albarghouthi, and Loris D’Antoni. 2021. Certified Robustness to Programmable Transformations in LSTMs. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1068–1083, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Certified Robustness to Programmable Transformations in LSTMs (Zhang et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.82.pdf
Software:
 2021.emnlp-main.82.Software.zip
Code
 foreverzyh/certified_lstms
Data
IMDb Movie Reviews, SST, SST-2