Targeted Adversarial Training for Natural Language Understanding

Lis Pereira, Xiaodong Liu, Hao Cheng, Hoifung Poon, Jianfeng Gao, Ichiro Kobayashi


Abstract
We present a simple yet effective Targeted Adversarial Training (TAT) algorithm to improve adversarial training for natural language understanding. The key idea is to introspect current mistakes and prioritize adversarial training steps to where the model errs the most. Experiments show that TAT can significantly improve accuracy over standard adversarial training on GLUE and attain new state-of-the-art zero-shot results on XNLI. Our code will be released upon acceptance of the paper.
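The targeting idea in the abstract can be sketched in a few lines. Below is a minimal PyTorch sketch, not the authors' released implementation (see namisan/mt-dnn for that): it assumes a hypothetical `model` that maps input embeddings directly to logits, and it substitutes a one-step gradient-sign (FGSM-style) perturbation for the paper's perturbation method. The targeting step, restricting adversarial updates to the examples where the model currently errs the most, is the idea the abstract describes.

```python
import torch
import torch.nn.functional as F

def targeted_adversarial_loss(model, embeds, labels, k=8, epsilon=1e-3, alpha=1.0):
    # Clean forward pass with per-example losses, so we can introspect
    # where the model currently errs the most.
    logits = model(embeds)
    per_example = F.cross_entropy(logits, labels, reduction="none")
    clean_loss = per_example.mean()

    # Targeting step: select the k worst-classified examples in the batch.
    k = min(k, embeds.size(0))
    worst = per_example.topk(k).indices

    # One-step gradient-sign perturbation in embedding space, applied to
    # the targeted examples only (a stand-in for the paper's method).
    adv = embeds[worst].detach().requires_grad_(True)
    grad, = torch.autograd.grad(
        F.cross_entropy(model(adv), labels[worst]), adv
    )
    adv_term = F.cross_entropy(model(adv + epsilon * grad.sign()), labels[worst])

    # Combined objective: clean loss plus the targeted adversarial term.
    return clean_loss + alpha * adv_term
```

In a training loop one would backpropagate this combined loss through the encoder; `k`, `epsilon`, and `alpha` here are illustrative hyperparameters, not values taken from the paper.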
Anthology ID:
2021.naacl-main.424
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Editors:
Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
5385–5393
URL:
https://aclanthology.org/2021.naacl-main.424
DOI:
10.18653/v1/2021.naacl-main.424
Cite (ACL):
Lis Pereira, Xiaodong Liu, Hao Cheng, Hoifung Poon, Jianfeng Gao, and Ichiro Kobayashi. 2021. Targeted Adversarial Training for Natural Language Understanding. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5385–5393, Online. Association for Computational Linguistics.
Cite (Informal):
Targeted Adversarial Training for Natural Language Understanding (Pereira et al., NAACL 2021)
PDF:
https://aclanthology.org/2021.naacl-main.424.pdf
Video:
https://aclanthology.org/2021.naacl-main.424.mp4
Code
namisan/mt-dnn
Data
GLUE, MRPC, MultiNLI, SNLI