Do Syntax Trees Help Pre-trained Transformers Extract Information?

Devendra Sachan, Yuhao Zhang, Peng Qi, William L. Hamilton


Abstract
Much recent work suggests that incorporating syntax information from dependency trees can improve task-specific transformer models. However, the effect of incorporating dependency tree information into pre-trained transformer models (e.g., BERT) remains unclear, especially given recent studies highlighting how these models implicitly encode syntax. In this work, we systematically study the utility of incorporating dependency trees into pre-trained transformers on three representative information extraction tasks: semantic role labeling (SRL), named entity recognition, and relation extraction. We propose and investigate two distinct strategies for incorporating dependency structure: a late fusion approach, which applies a graph neural network on the output of a transformer, and a joint fusion approach, which infuses syntax structure into the transformer attention layers. These strategies are representative of prior work, but we introduce additional model design elements that are necessary for obtaining improved performance. Our empirical analysis demonstrates that these syntax-infused transformers obtain state-of-the-art results on SRL and relation extraction tasks. However, our analysis also reveals a critical shortcoming of these models: we find that their performance gains are highly contingent on the availability of human-annotated dependency parses, which raises important questions regarding the viability of syntax-augmented transformers in real-world applications.
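The late fusion strategy summarized in the abstract can be pictured with a short sketch: a graph convolution over dependency edges applied on top of the contextual token representations produced by a pre-trained transformer. This is an illustrative sketch only, not the authors' method; the names DependencyGCNLayer and heads_to_adjacency are hypothetical and the random tensor stands in for BERT output. See the released repository DevSinghSachan/syntax-augmented-bert for the actual implementation.

    # Minimal late-fusion sketch (illustrative; assumed names, not the paper's code):
    # a graph convolution over dependency edges on top of transformer token states.
    import torch
    import torch.nn as nn


    class DependencyGCNLayer(nn.Module):
        """One graph-convolution layer over a dependency adjacency matrix."""

        def __init__(self, hidden_dim: int):
            super().__init__()
            self.linear = nn.Linear(hidden_dim, hidden_dim)

        def forward(self, hidden: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            # hidden: [batch, seq_len, hidden_dim]; adj: [batch, seq_len, seq_len]
            # Normalize by node degree so high-degree tokens are not over-weighted.
            degree = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
            neighbor_sum = torch.bmm(adj, self.linear(hidden)) / degree
            # Residual connection keeps the original transformer signal intact.
            return torch.relu(neighbor_sum) + hidden


    def heads_to_adjacency(heads, seq_len: int) -> torch.Tensor:
        """Build a symmetric adjacency matrix with self-loops from dependency heads.
        heads[i] is the index of token i's head, or -1 for the root token."""
        adj = torch.eye(seq_len)
        for child, head in enumerate(heads):
            if head >= 0:
                adj[child, head] = 1.0
                adj[head, child] = 1.0
        return adj


    if __name__ == "__main__":
        # Stand-in for transformer output, e.g. BERT's last hidden states.
        batch, seq_len, dim = 1, 5, 768
        bert_output = torch.randn(batch, seq_len, dim)
        # Hypothetical parse: token 0 is the root; the rest attach to tokens 0 or 2.
        adj = heads_to_adjacency([-1, 0, 0, 2, 2], seq_len).unsqueeze(0)
        fused = DependencyGCNLayer(dim)(bert_output, adj)
        print(fused.shape)  # torch.Size([1, 5, 768])

The joint fusion variant instead modifies the transformer's own attention layers to attend along parse edges; it is not shown here because it requires changes inside the pre-trained model rather than a module stacked on its output.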
Anthology ID:
2021.eacl-main.228
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Editors:
Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
2647–2661
URL:
https://aclanthology.org/2021.eacl-main.228
DOI:
10.18653/v1/2021.eacl-main.228
Cite (ACL):
Devendra Sachan, Yuhao Zhang, Peng Qi, and William L. Hamilton. 2021. Do Syntax Trees Help Pre-trained Transformers Extract Information?. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2647–2661, Online. Association for Computational Linguistics.
Cite (Informal):
Do Syntax Trees Help Pre-trained Transformers Extract Information? (Sachan et al., EACL 2021)
PDF:
https://aclanthology.org/2021.eacl-main.228.pdf
Software:
2021.eacl-main.228.Software.tgz
Code:
DevSinghSachan/syntax-augmented-bert
Data:
CoNLL-2012, TACRED