Mutual Information Maximization for Simple and Accurate Part-Of-Speech Induction

Karl Stratos


Abstract
We address part-of-speech (POS) induction by maximizing the mutual information between the induced label and its context. We focus on two training objectives that are amenable to stochastic gradient descent (SGD): a novel generalization of the classical Brown clustering objective and a recently proposed variational lower bound. While both objectives are subject to noise in gradient updates, we show through analysis and experiments that the variational lower bound is robust whereas the generalized Brown objective is vulnerable. We obtain strong performance on a multitude of datasets and languages with a simple architecture that encodes morphology and context.
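The variational lower bound mentioned in the abstract can be sketched numerically. The sketch below is a minimal illustration under our own assumptions, not the paper's implementation: it estimates a standard variational lower bound on the mutual information between an induced label Z and its context C, computed as H(Z) minus the average cross-entropy between a word-side posterior p(z|x) and a context-side predictor q(z|c). The function name and batch layout are illustrative.

```python
import numpy as np

def variational_mi_lower_bound(p, q):
    """Batch estimate of a variational lower bound on I(Z; C).

    p: (B, m) array, posterior over m labels from the word model p(z|x)
    q: (B, m) array, posterior over m labels from the context model q(z|c)

    Bound used: I(Z; C) >= H(Z) - E[ H(p(.|x), q(.|c)) ],
    where H(Z) is the entropy of the batch-averaged label marginal and
    H(p, q) is the cross-entropy of p against q.
    """
    eps = 1e-12                                   # numerical floor for log
    marginal = p.mean(axis=0)                     # estimate of p(z) over the batch
    h_marginal = -np.sum(marginal * np.log(marginal + eps))
    cross_ent = -np.mean(np.sum(p * np.log(q + eps), axis=1))
    return h_marginal - cross_ent

# Example: with a shared posterior (q = p), the bound reduces to the
# Jensen gap H(mean p) - mean H(p), which is non-negative.
rng = np.random.default_rng(0)
logits = rng.normal(size=(32, 10))
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
bound = variational_mi_lower_bound(p, p)
```

In an SGD setting, one would maximize this quantity per minibatch; because both terms are batch averages, the gradient estimate is noisy, which is the robustness issue the paper analyzes.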
Anthology ID:
N19-1113
Volume:
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Editors:
Jill Burstein, Christy Doran, Thamar Solorio
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1095–1104
URL:
https://aclanthology.org/N19-1113
DOI:
10.18653/v1/N19-1113
Cite (ACL):
Karl Stratos. 2019. Mutual Information Maximization for Simple and Accurate Part-Of-Speech Induction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1095–1104, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
Mutual Information Maximization for Simple and Accurate Part-Of-Speech Induction (Stratos, NAACL 2019)
PDF:
https://aclanthology.org/N19-1113.pdf
Video:
https://aclanthology.org/N19-1113.mp4
Code:
karlstratos/mmi-tagger