Identifying negative language transfer in learner errors using POS information

Leticia Farias Wanderley, Carrie Demmans Epp


Abstract
A common mistake made by language learners is the misguided usage of first language rules when communicating in another language. In this paper, n-gram and recurrent neural network language models are used to represent language structures and detect when Chinese native speakers incorrectly transfer rules from their first language (i.e., Chinese) into their English writing. These models make it possible to inform corrective error feedback with error causes, such as negative language transfer. We report the results of our negative language transfer detection experiments with n-gram and recurrent neural network models that were trained using part-of-speech tags. The best performing model achieves an F1-score of 0.51 when tasked with recognizing negative language transfer in English learner data.
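The abstract's core idea can be illustrated with a minimal sketch: train an n-gram language model over part-of-speech tag sequences drawn from well-formed English text, then flag learner sentences whose POS sequences the model finds improbable as candidates for negative language transfer. This is not the authors' implementation (the paper also uses recurrent neural network models and annotated learner errors); the training sentences and the scoring function below are hypothetical, while the NLTK calls are standard library API.

```python
# Sketch: POS-tag n-gram language model for flagging improbable
# (possibly transferred) syntactic structures. Hypothetical data;
# requires nltk.download('punkt') and
# nltk.download('averaged_perceptron_tagger').
import nltk
from nltk.lm import Laplace
from nltk.lm.preprocessing import padded_everygram_pipeline, pad_both_ends
from nltk.util import ngrams

N = 3  # trigram model over POS tags

def pos_sequence(sentence):
    """Map a raw sentence to its sequence of Penn Treebank POS tags."""
    tokens = nltk.word_tokenize(sentence)
    return [tag for _, tag in nltk.pos_tag(tokens)]

# Hypothetical training corpus of well-formed English sentences.
train_sentences = [
    "She gave him the book yesterday .",
    "The students have finished their homework .",
]
train_pos = [pos_sequence(s) for s in train_sentences]

# Fit a Laplace-smoothed trigram LM over the POS tag sequences.
train_grams, vocab = padded_everygram_pipeline(N, train_pos)
lm = Laplace(N)
lm.fit(train_grams, vocab)

def transfer_score(sentence):
    """Perplexity of the sentence's POS sequence under the English POS LM.
    Higher values suggest structures that are unlikely in native English
    writing and so are candidates for negative language transfer."""
    padded = pad_both_ends(pos_sequence(sentence), n=N)
    return lm.perplexity(ngrams(padded, N))

# A learner sentence with a plausible Chinese-influenced word order
# (adverbial placed before the verb) scores higher than native-like order.
print(transfer_score("She yesterday gave him the book ."))
print(transfer_score("She gave him the book yesterday ."))
```

In the paper itself, such POS-based language models are applied to annotated learner errors to decide whether a given error reflects first language structure; the sketch above shows only the language modeling component of that pipeline.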
Anthology ID:
2021.bea-1.7
Volume:
Proceedings of the 16th Workshop on Innovative Use of NLP for Building Educational Applications
Month:
April
Year:
2021
Address:
Online
Editors:
Jill Burstein, Andrea Horbach, Ekaterina Kochmar, Ronja Laarmann-Quante, Claudia Leacock, Nitin Madnani, Ildikó Pilán, Helen Yannakoudakis, Torsten Zesch
Venue:
BEA
SIG:
SIGEDU
Publisher:
Association for Computational Linguistics
Pages:
64–74
URL:
https://aclanthology.org/2021.bea-1.7
Cite (ACL):
Leticia Farias Wanderley and Carrie Demmans Epp. 2021. Identifying negative language transfer in learner errors using POS information. In Proceedings of the 16th Workshop on Innovative Use of NLP for Building Educational Applications, pages 64–74, Online. Association for Computational Linguistics.
Cite (Informal):
Identifying negative language transfer in learner errors using POS information (Farias Wanderley & Demmans Epp, BEA 2021)
PDF:
https://aclanthology.org/2021.bea-1.7.pdf
Code
EdTeKLA/LanguageTransfer
Data
FCE
Universal Dependencies