A Preliminary Study on Evaluating Consultation Notes With Post-Editing

Francesco Moramarco, Alex Papadopoulos Korfiatis, Aleksandar Savkov, Ehud Reiter


Abstract
Automatic summarisation has the potential to aid physicians in streamlining clerical tasks such as note taking. However, it is notoriously difficult to evaluate these systems and to demonstrate that they are safe to use in a clinical setting. To circumvent this issue, we propose a semi-automatic approach whereby physicians post-edit generated notes before submitting them. We conduct a preliminary study on the time saved by post-editing automatically generated consultation notes. Our evaluators are asked to listen to mock consultations and to post-edit three generated notes. We time this process and find that it is faster than writing the note from scratch. We present insights and lessons learnt from this experiment.
Anthology ID:
2021.humeval-1.7
Volume:
Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval)
Month:
April
Year:
2021
Address:
Online
Editors:
Anya Belz, Shubham Agarwal, Yvette Graham, Ehud Reiter, Anastasia Shimorina
Venue:
HumEval
Publisher:
Association for Computational Linguistics
Pages:
62–68
URL:
https://aclanthology.org/2021.humeval-1.7
Cite (ACL):
Francesco Moramarco, Alex Papadopoulos Korfiatis, Aleksandar Savkov, and Ehud Reiter. 2021. A Preliminary Study on Evaluating Consultation Notes With Post-Editing. In Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval), pages 62–68, Online. Association for Computational Linguistics.
Cite (Informal):
A Preliminary Study on Evaluating Consultation Notes With Post-Editing (Moramarco et al., HumEval 2021)
PDF:
https://aclanthology.org/2021.humeval-1.7.pdf