Assessing SRL Frameworks with Automatic Training Data Expansion

Silvana Hartmann, Éva Mújdricza-Maydt, Ilia Kuznetsov, Iryna Gurevych, Anette Frank

Abstract
We present the first experiment-based study that explicitly contrasts the three major semantic role labeling (SRL) frameworks. As a prerequisite, we create a dataset labeled with parallel FrameNet-, PropBank-, and VerbNet-style labels for German. We train a state-of-the-art SRL tool for German on each annotation style and provide a comparative analysis across frameworks. We further explore the behavior of the frameworks under automatic training data generation. VerbNet provides greater semantic expressivity than PropBank, and we find that its generalization capacity approaches that of PropBank in SRL training, but it benefits less from training data expansion than the sparse-data-affected FrameNet.
Anthology ID: W17-0814
Volume: Proceedings of the 11th Linguistic Annotation Workshop
Month: April
Year: 2017
Address: Valencia, Spain
Editors: Nathan Schneider, Nianwen Xue
Venue: LAW
SIG: SIGANN
Publisher: Association for Computational Linguistics
Pages: 115–121
URL: https://aclanthology.org/W17-0814
DOI: 10.18653/v1/W17-0814
Cite (ACL): Silvana Hartmann, Éva Mújdricza-Maydt, Ilia Kuznetsov, Iryna Gurevych, and Anette Frank. 2017. Assessing SRL Frameworks with Automatic Training Data Expansion. In Proceedings of the 11th Linguistic Annotation Workshop, pages 115–121, Valencia, Spain. Association for Computational Linguistics.
Cite (Informal): Assessing SRL Frameworks with Automatic Training Data Expansion (Hartmann et al., LAW 2017)
PDF: https://aclanthology.org/W17-0814.pdf