Neural Event Extraction from Movies Description

Alex Tozzo, Dejan Jovanović, Mohamed Amer


Abstract
We present a novel approach for event extraction and abstraction from movie descriptions. Our event frame consists of “who”, “did what”, “to whom”, “where”, and “when”. We formulate our problem using a recurrent neural network, enhanced with structural features extracted from a syntactic parser, and trained using curriculum learning by progressively increasing the difficulty of the sentences. Our model serves as an intermediate step towards question answering systems, visual storytelling, and story completion tasks. We evaluate our approach on the MovieQA dataset.
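
As an illustration only (this is not code from the paper), the event frame described in the abstract can be pictured as a simple structured record, and the curriculum can be approximated by ordering training sentences with a difficulty proxy such as token count. All names below are hypothetical:

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class EventFrame:
        """One extracted event: who did what, to whom, where, and when."""
        who: str                # agent / subject
        did_what: str           # predicate / action
        to_whom: Optional[str]  # patient / object, if any
        where: Optional[str]    # location, if any
        when: Optional[str]     # time, if any

    def curriculum_order(sentences: List[str]) -> List[str]:
        """Sort training sentences from easy to hard, using sentence
        length as one possible proxy for difficulty."""
        return sorted(sentences, key=lambda s: len(s.split()))

    # Example frame for "Indiana Jones recovers the idol in the temple."
    frame = EventFrame(who="Indiana Jones", did_what="recovers",
                       to_whom="the idol", where="in the temple", when=None)
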
Anthology ID:
W18-1507
Volume:
Proceedings of the First Workshop on Storytelling
Month:
June
Year:
2018
Address:
New Orleans, Louisiana
Editors:
Margaret Mitchell, Ting-Hao ‘Kenneth’ Huang, Francis Ferraro, Ishan Misra
Venue:
Story-NLP
Publisher:
Association for Computational Linguistics
Pages:
60–66
URL:
https://aclanthology.org/W18-1507
DOI:
10.18653/v1/W18-1507
Cite (ACL):
Alex Tozzo, Dejan Jovanović, and Mohamed Amer. 2018. Neural Event Extraction from Movies Description. In Proceedings of the First Workshop on Storytelling, pages 60–66, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal):
Neural Event Extraction from Movies Description (Tozzo et al., Story-NLP 2018)
PDF:
https://aclanthology.org/W18-1507.pdf
Data
MovieQA