Modeling Content and Context with Deep Relational Learning

Maria Leonor Pacheco, Dan Goldwasser
Abstract
Building models for realistic natural language tasks requires dealing with long texts and accounting for complicated structural dependencies. Neural-symbolic representations have emerged as a way to combine the reasoning capabilities of symbolic methods with the expressiveness of neural networks. However, most existing frameworks for combining neural and symbolic representations have been designed for classic relational learning tasks that operate over a universe of symbolic entities and relations. In this paper, we present DRaiL, an open-source declarative framework for specifying deep relational models, designed to support a variety of NLP scenarios. Our framework supports easy integration with expressive language encoders and provides an interface to study the interactions between representation, inference, and learning.
Anthology ID:
2021.tacl-1.7
Volume:
Transactions of the Association for Computational Linguistics, Volume 9
Month:
Year:
2021
Address:
Cambridge, MA
Editors:
Brian Roark, Ani Nenkova
Venue:
TACL
Publisher:
MIT Press
Pages:
100–119
URL:
https://aclanthology.org/2021.tacl-1.7
DOI:
10.1162/tacl_a_00357
Bibkey:
Cite (ACL):
Maria Leonor Pacheco and Dan Goldwasser. 2021. Modeling Content and Context with Deep Relational Learning. Transactions of the Association for Computational Linguistics, 9:100–119.
Cite (Informal):
Modeling Content and Context with Deep Relational Learning (Pacheco & Goldwasser, TACL 2021)
PDF:
https://aclanthology.org/2021.tacl-1.7.pdf
Video:
https://aclanthology.org/2021.tacl-1.7.mp4