Some Languages Seem Easier to Parse Because Their Treebanks Leak

Anders Søgaard


Abstract
Cross-language differences in (universal) dependency parsing performance are mostly attributed to treebank size, average sentence length, average dependency length, morphological complexity, and domain differences. We point to a factor not previously discussed: if we abstract away from words and dependency labels, how many graphs in the test data were already seen in the training data? We compute graph isomorphisms and show that, treebank size aside, overlap between training and test graphs explains more of the observed variation than standard explanations such as those above.
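The overlap measure sketched in the abstract can be illustrated with a short script. The sketch below is not the paper's implementation: it assumes dependency trees are given as head lists (one head index per token, 0 for the root) and compares them as unordered rooted trees via an AHU-style canonical encoding, which decides tree isomorphism; the function names `canonical_form` and `leakage` are illustrative.

```python
def canonical_form(heads):
    """AHU-style canonical encoding of an unlabeled rooted tree.

    `heads[i]` is the head of token i+1 (0 denotes the artificial root),
    so trees with the same shape get the same string, regardless of
    words, labels, or token order.
    """
    children = {i: [] for i in range(len(heads) + 1)}
    for dep, head in enumerate(heads, start=1):
        children[head].append(dep)

    def encode(node):
        # Sorting the child encodings makes the string order-invariant.
        return "(" + "".join(sorted(encode(c) for c in children[node])) + ")"

    return encode(0)


def leakage(train_trees, test_trees):
    """Fraction of test trees whose (delexicalized) shape occurs in training."""
    seen = {canonical_form(t) for t in train_trees}
    hits = sum(canonical_form(t) in seen for t in test_trees)
    return hits / len(test_trees)


# Toy example: one training tree, two test trees.
train = [[0, 1, 1]]              # root -> 1 -> {2, 3}
test = [[2, 0, 2], [0, 3, 1]]    # same shape as training; a chain
print(leakage(train, test))      # -> 0.5
```

Note that this treats trees as unordered, so two sentences with mirror-image structures count as the same shape; restricting to ordered trees would give a stricter notion of overlap.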
Anthology ID:
2020.emnlp-main.220
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2765–2770
URL:
https://aclanthology.org/2020.emnlp-main.220
DOI:
10.18653/v1/2020.emnlp-main.220
Cite (ACL):
Anders Søgaard. 2020. Some Languages Seem Easier to Parse Because Their Treebanks Leak. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2765–2770, Online. Association for Computational Linguistics.
Cite (Informal):
Some Languages Seem Easier to Parse Because Their Treebanks Leak (Søgaard, EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.220.pdf
Optional supplementary material:
2020.emnlp-main.220.OptionalSupplementaryMaterial.zip
Video:
https://slideslive.com/38938710