Yevgen Matusevych


2021

A phonetic model of non-native spoken word processing
Yevgen Matusevych | Herman Kamper | Thomas Schatz | Naomi Feldman | Sharon Goldwater
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Non-native speakers show difficulties with spoken word processing. Many studies attribute these difficulties to imprecise phonological encoding of words in lexical memory. We test an alternative hypothesis: that some of these difficulties can arise from non-native speakers’ phonetic perception. We train a computational model of phonetic learning, which has no access to phonology, on either one or two languages. We first show that the model exhibits predictable behaviors on phone-level and word-level discrimination tasks. We then test the model on a spoken word processing task, showing that phonology may not be necessary to explain some of the word processing effects observed in non-native speakers. An additional analysis of the model’s lexical representation space shows that the two training languages are not fully separated in that space, much as the languages of a bilingual human speaker are not.
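
The phone- and word-level discrimination tasks mentioned above are commonly set up as ABX tasks: given representations of three stimuli A, B, and X, where X belongs to the same category as A, the model succeeds if it places X closer to A than to B. Below is a minimal sketch of such an evaluation in Python, assuming frame-level feature sequences compared with dynamic time warping; the function names, the cosine distance, and the toy data are illustrative assumptions, not the paper's exact setup.

    import numpy as np

    def cosine_dist(u, v):
        # Cosine distance between two frame vectors.
        return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

    def dtw_distance(seq_a, seq_b):
        # Length-normalized dynamic-time-warping distance between two
        # (frames x dims) feature sequences.
        n, m = len(seq_a), len(seq_b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = cosine_dist(seq_a[i - 1], seq_b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m] / (n + m)

    def abx_accuracy(triples):
        # Fraction of (A, B, X) triples in which X is closer to A than to B.
        correct = sum(dtw_distance(x, a) < dtw_distance(x, b) for a, b, x in triples)
        return correct / len(triples)

    # Toy data: X is a noisy copy of A, so the triple should be scored correct.
    rng = np.random.default_rng(0)
    a = rng.normal(size=(20, 39))
    x = a + rng.normal(scale=0.1, size=a.shape)
    b = rng.normal(size=(25, 39))
    print(abx_accuracy([(a, b, x)]))  # expected: 1.0

Normalizing the DTW cost by the combined sequence length keeps longer stimuli from dominating the comparison.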

2019

Are we there yet? Encoder-decoder neural networks as cognitive models of English past tense inflection
Maria Corkery | Yevgen Matusevych | Sharon Goldwater
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

The cognitive mechanisms needed to account for the English past tense have long been a subject of debate in linguistics and cognitive science. Neural network models were proposed early on, but were shown to have clear flaws. Recently, however, Kirov and Cotterell (2018) showed that modern encoder-decoder (ED) models overcome many of these flaws. They also presented evidence that ED models demonstrate humanlike performance in a nonce-word task. Here, we look more closely at the behaviour of their model in this task. We find that (1) the model exhibits instability across multiple simulations in terms of its correlation with human data, and (2) even when results are aggregated across simulations (treating each simulation as an individual human participant), the fit to the human data is not strong, and is in fact worse than that of an older rule-based model. These findings hold up across several alternative training regimes and evaluation measures. Although other neural architectures might do better, we conclude that there is still insufficient evidence to claim that neural nets are a good cognitive model for this task.
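
The analysis described in this abstract amounts to simple bookkeeping: run the same architecture under several random seeds, correlate each run's outputs with the human ratings, and then also correlate the across-run average. A hedged illustration in Python follows; the arrays are random stand-ins, and all names and sizes are invented for the example rather than taken from the paper.

    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical data: one row per simulation (random seed), one column per
    # nonce verb, holding the model's probability of producing the regular
    # ("-ed") past-tense form; plus human ratings for the same verbs.
    n_sims, n_verbs = 10, 60
    rng = np.random.default_rng(1)
    model_regular = rng.uniform(size=(n_sims, n_verbs))  # stand-in for model runs
    human_regular = rng.uniform(size=n_verbs)            # stand-in for human data

    # (1) Per-simulation fit: the correlation can vary widely across seeds,
    # which is the kind of instability the abstract describes.
    per_sim = [spearmanr(run, human_regular)[0] for run in model_regular]
    print("per-simulation rho:", np.round(per_sim, 2))

    # (2) Aggregate fit: average over runs, treating each simulation as one
    # "participant", then correlate the mean with the human ratings.
    rho, _ = spearmanr(model_regular.mean(axis=0), human_regular)
    print("aggregate rho:", round(rho, 2))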

2018

Modeling bilingual word associations as connected monolingual networks
Yevgen Matusevych | Amir Ardalan Kalantari Dehaghi | Suzanne Stevenson
Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018)

2013

Computational simulations of second language construction learning
Yevgen Matusevych | Afra Alishahi | Ad Backus
Proceedings of the Fourth Annual Workshop on Cognitive Modeling and Computational Linguistics (CMCL)