Language Proficiency Scoring

Cristina Arhiliuc, Jelena Mitrović, Michael Granitzer


Abstract
The Common European Framework of Reference (CEFR) provides generic guidelines for the evaluation of language proficiency. Nevertheless, automated proficiency classification systems typically rely on different approaches for different languages. Our paper evaluates and extends the results of an approach to Automatic Essay Scoring proposed as part of the REPROLANG 2020 challenge. We compare our results with those reported in the published paper and include a new corpus for the English language for further experiments. Our results are lower than expected when using the same approach, and the system does not scale well with the added English corpus.
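The abstract describes CEFR-level scoring only at a high level, so the following is a purely illustrative sketch of framing proficiency scoring as supervised text classification over CEFR labels. The file name `essays.csv`, the column names `text` and `cefr_level`, and the TF-IDF plus logistic-regression pipeline are assumptions for illustration; they are not the system evaluated in the paper.

```python
# Hypothetical sketch: CEFR-level essay scoring as plain supervised text
# classification. Data schema and model choice are illustrative assumptions,
# not the approach reproduced in the paper.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Assumed columns: "text" (the essay) and "cefr_level" (A1..C2).
df = pd.read_csv("essays.csv")

X_train, X_test, y_train, y_test = train_test_split(
    df["text"],
    df["cefr_level"],
    test_size=0.2,
    random_state=42,
    stratify=df["cefr_level"],
)

# Word n-gram TF-IDF features followed by a linear classifier.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), max_features=20000)),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(X_train, y_train)

# Per-level precision/recall/F1 on the held-out split.
print(classification_report(y_test, clf.predict(X_test)))
```

A multilingual setup would typically swap in language-specific features or corpora per language, which is one reason such systems may not transfer directly when a new corpus (such as the English one mentioned above) is added.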
Anthology ID: 2020.lrec-1.690
Volume: Proceedings of the Twelfth Language Resources and Evaluation Conference
Month: May
Year: 2020
Address: Marseille, France
Editors: Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Asuncion Moreno, Jan Odijk, Stelios Piperidis
Venue: LREC
Publisher: European Language Resources Association
Pages: 5624–5630
Language: English
URL: https://aclanthology.org/2020.lrec-1.690
Cite (ACL): Cristina Arhiliuc, Jelena Mitrović, and Michael Granitzer. 2020. Language Proficiency Scoring. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 5624–5630, Marseille, France. European Language Resources Association.
Cite (Informal): Language Proficiency Scoring (Arhiliuc et al., LREC 2020)
PDF: https://aclanthology.org/2020.lrec-1.690.pdf