Noisy Neural Language Modeling for Typing Prediction in BCI Communication

Rui Dong, David Smith, Shiran Dudy, Steven Bedrick


Abstract
Language models have broad adoption in predictive typing tasks. When the typing history contains numerous errors, as in open-vocabulary predictive typing with brain-computer interface (BCI) systems, we observe significant performance degradation in both n-gram and recurrent neural network language models trained on clean text. In evaluations of ranking character predictions, training recurrent LMs on noisy text makes them much more robust to noisy histories, even when the error model is misspecified. We also propose an effective strategy for combining evidence from multiple ambiguous histories of BCI electroencephalogram measurements.
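The core idea of training on noisy text can be illustrated with a toy corruption function. This is a minimal sketch under assumed details: the paper's actual BCI error model is not specified here, so the uniform character-substitution model, the `corrupt` function name, and the error rate below are illustrative assumptions only.

```python
import random

def corrupt(text, error_rate=0.1, alphabet="abcdefghijklmnopqrstuvwxyz ", rng=None):
    """Inject character-substitution noise into clean training text.

    Loosely mimics noisy typing histories: each character is replaced by a
    uniformly random symbol from `alphabet` with probability `error_rate`.
    (Illustrative only; not the error model used in the paper.)
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    out = []
    for ch in text:
        if rng.random() < error_rate:
            out.append(rng.choice(alphabet))
        else:
            out.append(ch)
    return "".join(out)

# Corrupted copies of a clean corpus line would then serve as LM training input.
clean = "the quick brown fox"
noisy = corrupt(clean, error_rate=0.2)
print(len(noisy) == len(clean))
```

A language model trained on such corrupted histories (with clean targets) would see error patterns at training time, which is the robustness mechanism the abstract attributes to noisy training.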
Anthology ID:
W19-1707
Volume:
Proceedings of the Eighth Workshop on Speech and Language Processing for Assistive Technologies
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Editors:
Heidi Christensen, Kristy Hollingshead, Emily Prud’hommeaux, Frank Rudzicz, Keith Vertanen
Venue:
SLPAT
SIG:
SIGSLPAT
Publisher:
Association for Computational Linguistics
Pages:
44–51
URL:
https://aclanthology.org/W19-1707
DOI:
10.18653/v1/W19-1707
Cite (ACL):
Rui Dong, David Smith, Shiran Dudy, and Steven Bedrick. 2019. Noisy Neural Language Modeling for Typing Prediction in BCI Communication. In Proceedings of the Eighth Workshop on Speech and Language Processing for Assistive Technologies, pages 44–51, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
Noisy Neural Language Modeling for Typing Prediction in BCI Communication (Dong et al., SLPAT 2019)
PDF:
https://aclanthology.org/W19-1707.pdf
Data
New York Times Annotated Corpus