Bayesian Modeling of Lexical Resources for Low-Resource Settings

Nicholas Andrews, Mark Dredze, Benjamin Van Durme, Jason Eisner


Abstract
Lexical resources such as dictionaries and gazetteers are often used as auxiliary data for tasks such as part-of-speech induction and named-entity recognition. However, discriminative training with lexical features requires annotated data to reliably estimate the lexical feature weights and may result in overfitting the lexical features at the expense of features which generalize better. In this paper, we investigate a more robust approach: we stipulate that the lexicon is the result of an assumed generative process. Practically, this means that we may treat the lexical resources as observations under the proposed generative model. The lexical resources provide training data for the generative model without requiring separate data to estimate lexical feature weights. We evaluate the proposed approach in two settings: part-of-speech induction and low-resource named-entity recognition.
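The abstract's central idea, treating lexicon or gazetteer entries as observations generated by a model rather than as features whose weights must be learned from annotated data, can be illustrated with a toy sketch. The code below is a hypothetical, minimal illustration only, not the model from the paper: it assumes a Dirichlet-smoothed character-bigram spelling model per tag, with the gazetteer entries serving directly as the training observations. All names, priors, and constants are illustrative assumptions.

# Hypothetical toy sketch: gazetteer entries as observations under a
# simple generative model (NOT the paper's actual model).
from collections import defaultdict
import math

ALPHA = 1.0   # assumed symmetric Dirichlet pseudo-count for character bigrams
BETA = 1.0    # assumed symmetric Dirichlet pseudo-count for the tag prior

def train(lexicon):
    """lexicon: iterable of (word, tag) pairs, e.g. gazetteer entries."""
    bigram_counts = defaultdict(lambda: defaultdict(float))  # tag -> bigram -> count
    tag_counts = defaultdict(float)
    for word, tag in lexicon:
        tag_counts[tag] += 1.0
        chars = f"^{word.lower()}$"          # pad with boundary symbols
        for a, b in zip(chars, chars[1:]):
            bigram_counts[tag][(a, b)] += 1.0
    return bigram_counts, tag_counts

def log_score(word, tag, bigram_counts, tag_counts, vocab_size=128):
    """Collapsed Dirichlet-multinomial log p(tag, word), up to a constant."""
    total_tags = sum(tag_counts.values())
    lp = math.log((tag_counts[tag] + BETA) / (total_tags + BETA * len(tag_counts)))
    counts = bigram_counts[tag]
    total = sum(counts.values())
    chars = f"^{word.lower()}$"
    for a, b in zip(chars, chars[1:]):
        lp += math.log((counts[(a, b)] + ALPHA) / (total + ALPHA * vocab_size))
    return lp

def predict(word, bigram_counts, tag_counts):
    scores = {t: log_score(word, t, bigram_counts, tag_counts) for t in tag_counts}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # A tiny made-up gazetteer; the entries alone inform the prediction,
    # with no separately annotated data to fit lexical feature weights.
    gazetteer = [("london", "LOC"), ("paris", "LOC"), ("berlin", "LOC"),
                 ("alice", "PER"), ("bob", "PER"), ("carol", "PER")]
    bc, tc = train(gazetteer)
    print(predict("dublin", bc, tc))

The point of the sketch is the contrast drawn in the abstract: because the gazetteer is modeled generatively, its entries update the model's parameters directly, so no held-out annotated data is needed to estimate how much to trust the lexical evidence.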
Anthology ID:
P17-1095
Volume:
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2017
Address:
Vancouver, Canada
Editors:
Regina Barzilay, Min-Yen Kan
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1029–1039
URL:
https://aclanthology.org/P17-1095
DOI:
10.18653/v1/P17-1095
Cite (ACL):
Nicholas Andrews, Mark Dredze, Benjamin Van Durme, and Jason Eisner. 2017. Bayesian Modeling of Lexical Resources for Low-Resource Settings. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1029–1039, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Bayesian Modeling of Lexical Resources for Low-Resource Settings (Andrews et al., ACL 2017)
PDF:
https://aclanthology.org/P17-1095.pdf
Note:
P17-1095.Notes.zip
Video:
https://aclanthology.org/P17-1095.mp4