Zero-Shot Activity Recognition with Verb Attribute Induction

Rowan Zellers, Yejin Choi


Abstract
In this paper, we investigate large-scale zero-shot activity recognition by modeling the visual and linguistic attributes of action verbs. For example, the verb “salute” has several properties, such as being a light movement, a social act, and short in duration. We use these attributes as the internal mapping between visual and textual representations to reason about a previously unseen action. In contrast to much prior work that assumes access to gold standard attributes for zero-shot classes and focuses primarily on object attributes, our model uniquely learns to infer action attributes from dictionary definitions and distributed word representations. Experimental results confirm that action attributes inferred from language can provide a predictive signal for zero-shot prediction of previously unseen activities.
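The core idea in the abstract — using inferred verb attributes as the bridge between visual predictions and unseen action labels — can be illustrated with a minimal sketch. The attribute values, verb set, and nearest-neighbor matching below are illustrative assumptions, not the paper's actual annotated attributes or model:

```python
import numpy as np

# Hypothetical attribute vectors per verb, e.g. (light movement, social act,
# short duration). Values are made up for illustration; the paper induces
# attributes from dictionary definitions and word embeddings instead.
verb_attributes = {
    "salute": np.array([1.0, 1.0, 1.0]),  # light movement, social, short
    "run":    np.array([0.0, 0.0, 1.0]),  # vigorous, typically solitary
}

def zero_shot_classify(predicted_attrs, candidate_verbs):
    """Return the candidate verb whose attribute vector is most similar
    (by cosine similarity) to the attributes predicted from an image."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return max(candidate_verbs,
               key=lambda v: cos(predicted_attrs, verb_attributes[v]))

# Suppose a visual model scored an unseen action's attributes like this:
pred = np.array([0.9, 0.8, 0.7])
print(zero_shot_classify(pred, ["salute", "run"]))  # -> salute
```

Because the matching happens in attribute space, a verb never seen at training time can still be recognized, provided its attributes can be inferred from language.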
Anthology ID:
D17-1099
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Note:
Pages:
946–958
URL:
https://aclanthology.org/D17-1099
DOI:
10.18653/v1/D17-1099
Cite (ACL):
Rowan Zellers and Yejin Choi. 2017. Zero-Shot Activity Recognition with Verb Attribute Induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 946–958, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Zero-Shot Activity Recognition with Verb Attribute Induction (Zellers & Choi, EMNLP 2017)
PDF:
https://aclanthology.org/D17-1099.pdf
Code:
uwnlp/verb-attributes (+ additional community code)
Data:
FrameNet, ImageNet