LMVE at SemEval-2020 Task 4: Commonsense Validation and Explanation Using Pretraining Language Model

Shilei Liu, Yu Guo, BoChao Li, Feiliang Ren


Abstract
This paper introduces our system for commonsense validation and explanation. For the Sen-Making task, we use a novel pretrained language model based architecture to pick out the one of two given statements that is against common sense. For the Explanation task, we use a hint sentence mechanism to greatly improve performance. In addition, we propose subtask-level transfer learning to share information between subtasks.
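The paper's LMVE architecture is not reproduced here, but the Sen-Making setup (given two statements, flag the one against common sense) can be illustrated with a minimal sketch: score each statement's plausibility under a pretrained masked language model via pseudo-log-likelihood and pick the lower-scoring one. The model name and scoring scheme below are illustrative assumptions, not the authors' method; only a HuggingFace-style `transformers` API is assumed.

```python
# Minimal sketch (NOT the LMVE system): rank two statements by
# pseudo-log-likelihood under a pretrained masked LM and flag the
# lower-scoring one as the statement that is against common sense.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # illustrative choice
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum the log-probability of each token when it is masked in turn."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

def pick_nonsensical(s0: str, s1: str) -> int:
    """Return the index (0 or 1) of the statement judged against common sense."""
    return 0 if pseudo_log_likelihood(s0) < pseudo_log_likelihood(s1) else 1

print(pick_nonsensical("He put a turkey into the fridge.",
                       "He put an elephant into the fridge."))  # expected: 1
```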
Anthology ID:
2020.semeval-1.70
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Editors:
Aurelie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue:
SemEval
SIG:
SIGLEX
Publisher:
International Committee for Computational Linguistics
Pages:
562–568
URL:
https://aclanthology.org/2020.semeval-1.70
DOI:
10.18653/v1/2020.semeval-1.70
Cite (ACL):
Shilei Liu, Yu Guo, BoChao Li, and Feiliang Ren. 2020. LMVE at SemEval-2020 Task 4: Commonsense Validation and Explanation Using Pretraining Language Model. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 562–568, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
LMVE at SemEval-2020 Task 4: Commonsense Validation and Explanation Using Pretraining Language Model (Liu et al., SemEval 2020)
PDF:
https://aclanthology.org/2020.semeval-1.70.pdf