Self-training a Constituency Parser using n-gram Trees

Arda Çelebi, Arzucan Özgür


Abstract
In this study, we tackle the problem of self-training a feature-rich discriminative constituency parser. We approach self-training with the assumption that, while the full sentence parse tree produced by a parser may contain errors, some portions of it are more likely to be correct. We hypothesize that instead of feeding the parser its own guessed full sentence parse trees, we can break them down into smaller ones, namely n-gram trees, and perform self-training on those. We build an n-gram parser and transfer its distinct expertise to the full sentence parser by using the Hierarchical Joint Learning (HJL) approach. The resulting jointly self-trained parser obtains a slight improvement over the baseline.
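To make the core idea concrete, the following is a minimal sketch of extracting n-gram trees from a full parse. It assumes one plausible reading of the abstract: an n-gram tree is the smallest constituent of the full parse that covers a window of n consecutive words. The nested-tuple tree encoding and all function names here are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: break a full parse tree into "n-gram trees" -- here taken to be
# the minimal subtree covering each window of n consecutive words.
# Trees are encoded as nested tuples: (label, child, child, ...) with
# string leaves for words. This encoding is an assumption for illustration.

def leaves(tree):
    """Return the terminal words of a nested-tuple parse tree, in order."""
    if isinstance(tree, str):
        return [tree]
    out = []
    for child in tree[1:]:
        out.extend(leaves(child))
    return out

def minimal_covering_subtree(tree, start, end):
    """Smallest constituent whose leaf span contains [start, end)."""
    node, offset = tree, 0
    while not isinstance(node, str):
        narrower = None
        pos = offset
        for child in node[1:]:
            width = len(leaves(child))
            if pos <= start and end <= pos + width:
                narrower = (child, pos)  # child still covers the window
                break
            pos += width
        if narrower is None:
            return node  # no single child covers the window: node is minimal
        node, offset = narrower
    return node

def ngram_trees(tree, n):
    """All n-gram trees of a full parse, one per window of n words."""
    words = leaves(tree)
    return [minimal_covering_subtree(tree, i, i + n)
            for i in range(len(words) - n + 1)]

# Example: "the cat sat on the mat"
parse = ("S",
         ("NP", ("DT", "the"), ("NN", "cat")),
         ("VP", ("VBD", "sat"),
                ("PP", ("IN", "on"),
                       ("NP", ("DT", "the"), ("NN", "mat")))))
bigram_trees = ngram_trees(parse, 2)
# "the cat" -> the NP constituent; "cat sat" crosses NP/VP, so the
# minimal covering subtree is the whole S.
```

Under this reading, windows that align with a constituent (e.g. "the cat") yield small, likely-correct subtrees for self-training, while windows that cross constituent boundaries fall back to a larger covering subtree.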
Anthology ID:
L14-1448
Volume:
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
Month:
May
Year:
2014
Address:
Reykjavik, Iceland
Editors:
Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association (ELRA)
Pages:
2893–2896
URL:
http://www.lrec-conf.org/proceedings/lrec2014/pdf/543_Paper.pdf
Cite (ACL):
Arda Çelebi and Arzucan Özgür. 2014. Self-training a Constituency Parser using n-gram Trees. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 2893–2896, Reykjavik, Iceland. European Language Resources Association (ELRA).
Cite (Informal):
Self-training a Constituency Parser using n-gram Trees (Çelebi & Özgür, LREC 2014)
PDF:
http://www.lrec-conf.org/proceedings/lrec2014/pdf/543_Paper.pdf