Deep Inside-outside Recursive Autoencoder with All-span Objective

Ruyue Hong, Jiong Cai, Kewei Tu


Abstract
The deep inside-outside recursive autoencoder (DIORA) is a neural model for unsupervised constituency parsing. During its forward computation, it produces phrase and contextual representations for every span of the input sentence. The model is trained without labeled data by using the contextual representation of each leaf-level span (i.e., each span of length 1) to reconstruct the word inside that span. In this work, we extend the training objective of DIORA to make use of all spans rather than only leaf-level spans. We evaluate the new objective on datasets in two languages, English and Japanese, and empirically show that it improves parsing accuracy over the original DIORA.
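To illustrate the idea, here is a minimal, hypothetical sketch (not the authors' code) of the difference between a leaf-only and an all-span reconstruction objective. It assumes the inside (phrase) and outside (contextual) vectors for every span have already been produced by DIORA's inside-outside passes, and uses a simple max-margin score in which each span's outside vector should prefer its own inside vector over those of other spans; the exact loss in the paper may differ.

```python
# Hypothetical sketch of leaf-only vs. all-span reconstruction objectives.
# Assumption: inside/outside vectors per span are given (random toy values here).
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 8  # sentence length, vector dimension

# All spans (i, j) with 0 <= i < j <= n; spans of length 1 are the leaves.
spans = [(i, j) for i in range(n) for j in range(i + 1, n + 1)]
inside = {s: rng.normal(size=d) for s in spans}   # phrase representations
outside = {s: rng.normal(size=d) for s in spans}  # contextual representations

def recon_loss(span_set):
    """Max-margin reconstruction: the outside vector of each span should
    score its own inside vector higher than the inside vectors of other
    spans (used here as negative samples)."""
    total = 0.0
    for s in span_set:
        pos = outside[s] @ inside[s]
        for neg in span_set:
            if neg != s:
                total += max(0.0, 1.0 - pos + outside[s] @ inside[neg])
    return total / len(span_set)

leaf_spans = [s for s in spans if s[1] - s[0] == 1]
loss_leaf = recon_loss(leaf_spans)  # original DIORA: leaf-level spans only
loss_all = recon_loss(spans)        # extended objective: all spans
```

The all-span variant simply sums the same reconstruction term over every chart cell instead of only the bottom row, so no new parameters are required.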
Anthology ID:
2020.coling-main.322
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
3610–3615
URL:
https://aclanthology.org/2020.coling-main.322
DOI:
10.18653/v1/2020.coling-main.322
Cite (ACL):
Ruyue Hong, Jiong Cai, and Kewei Tu. 2020. Deep Inside-outside Recursive Autoencoder with All-span Objective. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3610–3615, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
Deep Inside-outside Recursive Autoencoder with All-span Objective (Hong et al., COLING 2020)
PDF:
https://aclanthology.org/2020.coling-main.322.pdf