Beyond Instructional Videos: Probing for More Diverse Visual-Textual Grounding on YouTube

Jack Hessel, Zhenhai Zhu, Bo Pang, Radu Soricut


Abstract
Pretraining from unlabelled web videos has quickly become the de facto means of achieving high performance on many video understanding tasks. Features are learned via prediction of grounded relationships between visual content and automatic speech recognition (ASR) tokens. However, prior pretraining work has been limited to instructional videos; a priori, we expect this domain to be relatively “easy”: speakers in instructional videos will often reference the literal objects/actions being depicted. We ask: can similar models be trained on more diverse video corpora? And, if so, what types of videos are “grounded” and what types are not? We fit a representative pretraining model to the diverse YouTube8M dataset and study its success and failure cases. We find that visual-textual grounding is indeed possible across previously unexplored video categories, and that pretraining on a more diverse set results in representations that generalize to both non-instructional and instructional domains.
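As a rough illustration of the alignment objective the abstract describes (predicting grounded relationships between pooled visual features and ASR-token embeddings), the following is a minimal sketch, assuming a PyTorch setup. The module names, feature dimensions, and shuffled-pair negative sampling are illustrative assumptions, not the authors' exact model.

# Minimal sketch of a visual-textual grounding probe: score whether a
# (video segment, ASR sentence) pair actually co-occurred. All names and
# dimensions are illustrative, not the paper's exact architecture.
import torch
import torch.nn as nn

class GroundingProbe(nn.Module):
    def __init__(self, visual_dim=1024, text_dim=768, shared_dim=256):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, shared_dim)
        self.text_proj = nn.Linear(text_dim, shared_dim)
        self.scorer = nn.Linear(shared_dim, 1)

    def forward(self, visual_feats, text_feats):
        # Project both modalities into a shared space, fuse them
        # elementwise, and emit one alignment logit per pair.
        v = self.visual_proj(visual_feats)
        t = self.text_proj(text_feats)
        return self.scorer(v * t).squeeze(-1)

# One training step: true pairs are positives; shuffling the text side
# within the batch produces mismatched pairs as negatives.
model = GroundingProbe()
loss_fn = nn.BCEWithLogitsLoss()
visual = torch.randn(8, 1024)   # e.g., pooled frame features per segment
text = torch.randn(8, 768)      # e.g., pooled ASR-sentence embeddings
pos_logits = model(visual, text)
neg_logits = model(visual, text[torch.randperm(8)])
loss = loss_fn(torch.cat([pos_logits, neg_logits]),
               torch.cat([torch.ones(8), torch.zeros(8)]))
loss.backward()

A probe trained this way can only succeed where the speech and the visuals genuinely co-refer, which is what makes pair-alignment accuracy a usable signal for asking which video categories are “grounded.”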
Anthology ID: 2020.emnlp-main.709
Volume: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month: November
Year: 2020
Address: Online
Editors: Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 8812–8822
URL: https://aclanthology.org/2020.emnlp-main.709
DOI: 10.18653/v1/2020.emnlp-main.709
Cite (ACL): Jack Hessel, Zhenhai Zhu, Bo Pang, and Radu Soricut. 2020. Beyond Instructional Videos: Probing for More Diverse Visual-Textual Grounding on YouTube. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8812–8822, Online. Association for Computational Linguistics.
Cite (Informal): Beyond Instructional Videos: Probing for More Diverse Visual-Textual Grounding on YouTube (Hessel et al., EMNLP 2020)
PDF: https://aclanthology.org/2020.emnlp-main.709.pdf
Video: https://slideslive.com/38938709
Code: google-research-datasets/i3-video
Data: i3-video, CrossTask, HowTo100M, Kinetics