IDS at SemEval-2020 Task 10: Does Pre-trained Language Model Know What to Emphasize?

Jaeyoul Shin, Taeuk Kim, Sang-goo Lee


Abstract
We propose a novel method that enables us to determine words that deserve to be emphasized from written text in visual media, relying only on the information from the self-attention distributions of pre-trained language models (PLMs). With extensive experiments and analyses, we show that 1) our zero-shot approach is superior to a reasonable baseline that adopts TF-IDF and that 2) there exist several attention heads in PLMs specialized for emphasis selection, confirming that PLMs are capable of recognizing important words in sentences.
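To make the abstract's idea concrete, here is a minimal zero-shot sketch of attention-based emphasis scoring: rank the words of a sentence by how much self-attention mass they receive inside a pre-trained language model. It assumes the HuggingFace transformers API and bert-base-uncased; the fixed layer/head choice and the subword-merging heuristic are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of zero-shot emphasis selection from PLM self-attention.
# Assumptions (not from the paper): bert-base-uncased, a single fixed
# (layer, head) pair, and a simple WordPiece-merging heuristic.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def emphasis_scores(sentence: str, layer: int = -1, head: int = 0):
    """Score each word by the total attention it receives in one head.

    The paper finds that some heads are specialized for emphasis
    selection, so in practice one would search over (layer, head)
    pairs on validation data rather than fix them as done here.
    """
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    # out.attentions: one (batch, heads, seq, seq) tensor per layer.
    attn = out.attentions[layer][0, head]      # (seq, seq)
    received = attn.sum(dim=0)                 # column sums = attention received
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    # Merge WordPiece fragments back into words, summing their scores.
    words, scores = [], []
    for tok, s in zip(tokens, received.tolist()):
        if tok in ("[CLS]", "[SEP]"):
            continue
        if tok.startswith("##") and words:
            words[-1] += tok[2:]
            scores[-1] += s
        else:
            words.append(tok)
            scores.append(s)
    return sorted(zip(words, scores), key=lambda ws: -ws[1])

print(emphasis_scores("Believe in yourself and anything is possible"))
```

In this sketch the highest-scoring words are the emphasis candidates; a TF-IDF baseline of the kind the paper compares against would instead rank words by corpus-level term statistics, with no access to sentence-level context.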
Anthology ID:
2020.semeval-1.185
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Editors:
Aurelie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue:
SemEval
SIG:
SIGLEX
Publisher:
International Committee for Computational Linguistics
Pages:
1371–1376
URL:
https://aclanthology.org/2020.semeval-1.185
DOI:
10.18653/v1/2020.semeval-1.185
Cite (ACL):
Jaeyoul Shin, Taeuk Kim, and Sang-goo Lee. 2020. IDS at SemEval-2020 Task 10: Does Pre-trained Language Model Know What to Emphasize? In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1371–1376, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
IDS at SemEval-2020 Task 10: Does Pre-trained Language Model Know What to Emphasize? (Shin et al., SemEval 2020)
PDF:
https://aclanthology.org/2020.semeval-1.185.pdf