Improving LSTM-based Video Description with Linguistic Knowledge Mined from Text

Subhashini Venugopalan1, Lisa Anne Hendricks2, Raymond Mooney1, Kate Saenko3
1The University of Texas at Austin, 2University of California, Berkeley, 3UMass Lowell


Abstract

This paper investigates how linguistic knowledge mined from large text corpora can aid the generation of natural language descriptions of videos. Specifically, we integrate both a neural language model and distributional semantics trained on large text corpora into a recent LSTM-based architecture for video description. We evaluate our approach on a collection of YouTube videos as well as two large movie description datasets, showing significant improvements in grammaticality while modestly improving descriptive quality.