Semantic-Unit-Based Dilated Convolution for Multi-Label Text Classification

Junyang Lin, Qi Su, Pengcheng Yang, Shuming Ma, Xu Sun


Abstract
We propose a novel model for multi-label text classification based on sequence-to-sequence learning. The model generates higher-level semantic unit representations with multi-level dilated convolution, together with a corresponding hybrid attention mechanism that extracts information at both the word level and the semantic-unit level. Our dilated convolution effectively reduces dimensionality and supports an exponential expansion of receptive fields without loss of local information, and the attention-over-attention mechanism captures more summary-relevant information from the source context. Experimental results show that the proposed model has significant advantages over the baseline models on the RCV1-V2 and Ren-CECps datasets, and our analysis demonstrates that the model is competitive with deterministic hierarchical models and more robust in classifying low-frequency labels.
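The multi-level dilated convolution described above can be sketched roughly as follows. This is a minimal illustration in PyTorch, not the authors' released implementation (see lancopku/SU4MLC for that); the module name MultiLevelDilatedConv and the hyperparameters (hidden size 512, kernel size 3, dilation rates 1, 2, 4) are illustrative assumptions.

import torch
import torch.nn as nn

class MultiLevelDilatedConv(nn.Module):
    """Sketch: stack 1-D convolutions with increasing dilation rates over
    word-level encoder states to form higher-level "semantic unit"
    representations. Hyperparameters here are illustrative, not the paper's."""

    def __init__(self, hidden_size=512, kernel_size=3, dilations=(1, 2, 4)):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Conv1d(
                hidden_size, hidden_size, kernel_size,
                dilation=d,
                padding=(kernel_size - 1) * d // 2,  # preserve sequence length
            )
            for d in dilations
        ])

    def forward(self, word_states):
        # word_states: (batch, seq_len, hidden_size) from the source encoder
        x = word_states.transpose(1, 2)          # -> (batch, hidden, seq_len)
        levels = []
        for conv in self.layers:
            x = torch.relu(conv(x))              # receptive field grows exponentially with dilation
            levels.append(x.transpose(1, 2))     # keep each level's output
        # top-level outputs serve as semantic-unit representations;
        # a hybrid attention would attend over these and the word states
        return levels[-1], levels

# toy usage
if __name__ == "__main__":
    encoder_states = torch.randn(2, 20, 512)     # (batch, words, hidden)
    semantic_units, all_levels = MultiLevelDilatedConv()(encoder_states)
    print(semantic_units.shape)                  # torch.Size([2, 20, 512])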
Anthology ID:
D18-1485
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4554–4564
URL:
https://aclanthology.org/D18-1485
DOI:
10.18653/v1/D18-1485
Cite (ACL):
Junyang Lin, Qi Su, Pengcheng Yang, Shuming Ma, and Xu Sun. 2018. Semantic-Unit-Based Dilated Convolution for Multi-Label Text Classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4554–4564, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Semantic-Unit-Based Dilated Convolution for Multi-Label Text Classification (Lin et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1485.pdf
Code:
lancopku/SU4MLC
Data:
RCV1