PoD: Positional Dependency-Based Word Embedding for Aspect Term Extraction

Yichun Yin, Chenguang Wang, Ming Zhang


Abstract
Dependency context-based word embeddings jointly learn representations of words and their dependency contexts, and have proven effective for aspect term extraction. In this paper, we design the positional dependency-based word embedding (PoD), which considers both dependency context and positional context for aspect term extraction. Specifically, the positional context is modeled via relative position encoding. In addition, we enhance the dependency context by integrating more lexical information (e.g., POS tags) along dependency paths. Experiments on the SemEval 2014/2015/2016 datasets show that our approach outperforms other embedding methods on aspect term extraction.
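To make the abstract's central idea concrete, below is a minimal Python sketch of scoring a (word, dependency context) pair where a relative position embedding is added to the context vector before the dot product, in the spirit of PoD. This is an illustrative assumption, not the authors' model: the vocabulary sizes, the distance clipping, and the additive combination of context and position vectors are all hypothetical choices.

# A minimal sketch (not the authors' code) of combining a dependency-based
# context with a relative position encoding. All sizes and the additive
# scoring scheme are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 1000        # word vocabulary size (assumed)
DEP_CTX = 200       # dependency-context vocabulary, e.g. "nsubj_eat" (assumed)
DIM = 100           # embedding dimension (assumed)
MAX_DIST = 5        # clip relative distances to [-5, 5] (assumed)

word_emb = rng.normal(scale=0.1, size=(VOCAB, DIM))             # target word vectors
dep_emb = rng.normal(scale=0.1, size=(DEP_CTX, DIM))            # dependency-context vectors
pos_emb = rng.normal(scale=0.1, size=(2 * MAX_DIST + 1, DIM))   # relative-position vectors

def score(word_id, dep_ctx_id, rel_dist):
    """Skip-gram-style score of a (word, dependency context) pair,
    where the context vector is shifted by a relative-position embedding."""
    d = int(np.clip(rel_dist, -MAX_DIST, MAX_DIST)) + MAX_DIST
    ctx = dep_emb[dep_ctx_id] + pos_emb[d]   # positional dependency context
    return word_emb[word_id] @ ctx

# e.g., word 42 with dependency context 7 observed 2 tokens to the right
print(score(42, 7, +2))

In a skip-gram-style trainer, such scores would be pushed up for observed pairs and down for negative samples; that training loop, and the extraction of POS-augmented dependency paths from a parser, are omitted here.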
Anthology ID:
2020.coling-main.150
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
1714–1719
URL:
https://aclanthology.org/2020.coling-main.150
DOI:
10.18653/v1/2020.coling-main.150
Cite (ACL):
Yichun Yin, Chenguang Wang, and Ming Zhang. 2020. PoD: Positional Dependency-Based Word Embedding for Aspect Term Extraction. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1714–1719, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
PoD: Positional Dependency-Based Word Embedding for Aspect Term Extraction (Yin et al., COLING 2020)
PDF:
https://aclanthology.org/2020.coling-main.150.pdf