Repeat before Forgetting: Spaced Repetition for Efficient and Effective Training of Neural Networks

Hadi Amiri, Timothy Miller, Guergana Savova


Abstract
We present a novel approach for training artificial neural networks. Our approach is inspired by broad evidence in psychology that shows human learners can learn efficiently and effectively by increasing intervals of time between subsequent reviews of previously learned materials (spaced repetition). We investigate the analogy between training neural models and findings in psychology about the human memory model, and develop an efficient and effective algorithm to train neural models. The core part of our algorithm is a cognitively-motivated scheduler according to which training instances and their “reviews” are spaced over time. Our algorithm uses only 34-50% of data per epoch, is 2.9-4.8 times faster than standard training, and outperforms competing state-of-the-art baselines. Our code is available at scholar.harvard.edu/hadi/RbF/.
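To make the scheduling idea concrete, the sketch below shows one way a spaced-repetition training loop could work: each instance is "reviewed" in an epoch only when it is due, and instances the model handles well (low loss) get progressively longer gaps before their next review. This is a minimal illustrative sketch, not the authors' RbF scheduler; the class name `SpacedRepetitionScheduler`, the doubling-delay rule, and the loss threshold are assumptions made here for illustration.

```python
# Minimal sketch of a spaced-repetition training scheduler (illustrative only;
# NOT the paper's exact RbF algorithm). Instances with low loss are "recalled"
# and have their review interval doubled; instances with high loss are
# "forgotten" and reviewed again in the next epoch.

import random


class SpacedRepetitionScheduler:
    """Tracks, for each training instance, the epoch at which it is next due."""

    def __init__(self, num_instances, max_delay=8):
        self.next_review = [0] * num_instances   # all instances due at epoch 0
        self.delay = [1] * num_instances         # current review interval (epochs)
        self.max_delay = max_delay

    def due(self, epoch):
        """Return indices of instances scheduled for review at this epoch."""
        return [i for i, e in enumerate(self.next_review) if e <= epoch]

    def update(self, index, epoch, loss, threshold=0.5):
        """After reviewing `index` at `epoch`, reschedule it based on its loss."""
        if loss < threshold:
            # Recalled: space the next review further out (capped).
            self.delay[index] = min(self.delay[index] * 2, self.max_delay)
        else:
            # Forgotten: review again as soon as possible.
            self.delay[index] = 1
        self.next_review[index] = epoch + self.delay[index]


# Toy usage: a fake per-instance loss stands in for a real neural model.
if __name__ == "__main__":
    random.seed(0)
    n, epochs = 1000, 10
    difficulty = [random.random() for _ in range(n)]   # pretend per-instance loss
    sched = SpacedRepetitionScheduler(n)

    for epoch in range(epochs):
        batch = sched.due(epoch)
        for i in batch:
            loss = difficulty[i] * (0.9 ** epoch)      # losses shrink as training proceeds
            sched.update(i, epoch, loss)
        print(f"epoch {epoch}: reviewed {len(batch)} / {n} instances")
```

Running the toy loop shows the fraction of instances reviewed per epoch shrinking as more of them are "recalled", which mirrors the abstract's claim of training on only a subset of the data per epoch.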
Anthology ID:
D17-1255
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
2401–2410
URL:
https://aclanthology.org/D17-1255
DOI:
10.18653/v1/D17-1255
Cite (ACL):
Hadi Amiri, Timothy Miller, and Guergana Savova. 2017. Repeat before Forgetting: Spaced Repetition for Efficient and Effective Training of Neural Networks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2401–2410, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Repeat before Forgetting: Spaced Repetition for Efficient and Effective Training of Neural Networks (Amiri et al., EMNLP 2017)
PDF:
https://aclanthology.org/D17-1255.pdf
Video:
https://aclanthology.org/D17-1255.mp4