SIGMORPHON 2020 Task 0 System Description: ETH Zürich Team

Martina Forster, Clara Meister


Abstract
This paper presents our system for the SIGMORPHON 2020 Shared Task. We build on the baseline systems, performing exact inference on models trained on language family data. Our systems return the globally best solution under these models. Our two systems achieve 80.9% and 75.6% accuracy on the test set. We ultimately find that, in this setting, exact inference does not seem to help or hinder the performance of morphological inflection generators, which stands in contrast to its effect on Neural Machine Translation (NMT) models.
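The abstract refers to exact inference, i.e., decoding that returns the globally highest-scoring sequence under the model rather than an approximation such as beam search. The sketch below is not the authors' implementation; it is a minimal, hypothetical illustration of one way to perform exact decoding for a locally normalized sequence model using best-first search, where the toy model and all function names are assumptions for illustration only.

import heapq
import math

def exact_decode(step_logprobs, bos, eos, max_len=30):
    """Return the globally best sequence (up to max_len) under a locally
    normalized model. step_logprobs(prefix) -> {next_symbol: log p(symbol | prefix)}.
    Since every log-probability is <= 0, prefix scores never increase as they
    are extended, so the first complete hypothesis popped from the priority
    queue is the global optimum (within the length limit)."""
    heap = [(0.0, (bos,))]  # entries: (negated log-probability, prefix)
    while heap:
        neg_score, prefix = heapq.heappop(heap)
        if prefix[-1] == eos or len(prefix) >= max_len:
            return prefix, -neg_score
        for sym, lp in step_logprobs(prefix).items():
            heapq.heappush(heap, (neg_score - lp, prefix + (sym,)))
    return None, -math.inf

# Toy character-level "model", purely for illustration.
def toy_model(prefix):
    if len(prefix) < 4:
        return {"a": math.log(0.6), "b": math.log(0.3), "</s>": math.log(0.1)}
    return {"</s>": 0.0}

best, score = exact_decode(toy_model, bos="<s>", eos="</s>")
print(best, score)

In practice, exact inference for neural sequence models is usually implemented with pruned depth-first search over the model's hypothesis space; the best-first variant above is shown only because it makes the optimality argument explicit.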
Anthology ID:
2020.sigmorphon-1.10
Volume:
Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
Month:
July
Year:
2020
Address:
Online
Editors:
Garrett Nicolai, Kyle Gorman, Ryan Cotterell
Venue:
SIGMORPHON
SIG:
SIGMORPHON
Publisher:
Association for Computational Linguistics
Pages:
106–110
URL:
https://aclanthology.org/2020.sigmorphon-1.10
DOI:
10.18653/v1/2020.sigmorphon-1.10
Cite (ACL):
Martina Forster and Clara Meister. 2020. SIGMORPHON 2020 Task 0 System Description: ETH Zürich Team. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 106–110, Online. Association for Computational Linguistics.
Cite (Informal):
SIGMORPHON 2020 Task 0 System Description: ETH Zürich Team (Forster & Meister, SIGMORPHON 2020)
PDF:
https://aclanthology.org/2020.sigmorphon-1.10.pdf