On Model Stability as a Function of Random Seed

Pranava Madhyastha, Rishabh Jain


Abstract
In this paper, we focus on quantifying model stability as a function of random seed by investigating the effects of the induced randomness on model performance and the robustness of the model in general. We specifically perform a controlled study on the effect of random seeds on the behaviour of attention-based, gradient-based, and surrogate-model-based (LIME) interpretations. Our analysis suggests that random seeds can adversely affect the consistency of models, resulting in counterfactual interpretations. We propose a technique called Aggressive Stochastic Weight Averaging (ASWA) and an extension called Norm-filtered Aggressive Stochastic Weight Averaging (NASWA), which improve the stability of models over random seeds. With our ASWA- and NASWA-based optimization, we are able to improve the robustness of the original model, on average reducing the standard deviation of the model’s performance by 72%.
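Below is a minimal PyTorch-style sketch of what an aggressive stochastic weight averaging loop could look like, assuming (the abstract alone does not specify the update rule) that "aggressive" means the running weight average is updated after every batch rather than once per epoch as in standard SWA; `model`, `loader`, `optimizer`, and `loss_fn` are hypothetical placeholders, not names from the authors' released code.

```python
import copy
import torch


def train_with_aswa(model, loader, optimizer, loss_fn, epochs=3):
    """Train `model` while maintaining a running average of its weights."""
    swa_model = copy.deepcopy(model)   # holds the averaged weights
    n_updates = 0

    for _ in range(epochs):
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()

            # Aggressive averaging step: fold the current weights into the
            # running average after every optimizer update (incremental mean).
            n_updates += 1
            with torch.no_grad():
                for avg_p, p in zip(swa_model.parameters(), model.parameters()):
                    avg_p.add_((p - avg_p) / n_updates)

    return swa_model  # evaluate with the averaged weights
```

Per the abstract, NASWA additionally filters which updates contribute to the average using a norm-based criterion; the exact criterion is described in the paper rather than reproduced here.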
Anthology ID:
K19-1087
Volume:
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Mohit Bansal, Aline Villavicencio
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
929–939
URL:
https://aclanthology.org/K19-1087
DOI:
10.18653/v1/K19-1087
Cite (ACL):
Pranava Madhyastha and Rishabh Jain. 2019. On Model Stability as a Function of Random Seed. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 929–939, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
On Model Stability as a Function of Random Seed (Madhyastha & Jain, CoNLL 2019)
PDF:
https://aclanthology.org/K19-1087.pdf
Supplementary material:
 K19-1087.Supplementary_Material.zip
Attachment:
 K19-1087.Attachment.zip
Code
 rishj97/ModelStability
Data
 AG News, IMDb Movie Reviews, SNLI, SST