Effective Attention Modeling for Neural Relation Extraction

Relation extraction is the task of determining the relation between two entities in a sentence. Distantly-supervised models are popular for this task. However, sentences can be long and two entities can be located far from each other in a sentence. The pieces of evidence supporting the presence of a relation between two entities may not be very direct, since the entities may be connected via some indirect links such as a third entity or via co-reference. Relation extraction in such scenarios becomes more challenging as we need to capture the long-distance interactions among the entities and other words in the sentence. Also, the words in a sentence do not contribute equally in identifying the relation between the two entities. To address this issue, we propose a novel and effective attention model which incorporates syntactic information of the sentence and a multi-factor attention mechanism. Experiments on the New York Times corpus show that our proposed model outperforms prior state-of-the-art models.


Introduction
Relation extraction from unstructured text is an important task to build knowledge bases (KB) automatically. Banko et al. (2007) used open information extraction (Open IE) to extract relation triples from sentences where verbs were considered as the relation, whereas supervised information extraction systems extract a set of pre-defined relations from text. Mintz et al. (2009), Riedel et al. (2010), and Hoffmann et al. (2011) proposed distant supervision to generate the training data for sentence-level relation extraction, where relation tuples (two entities and the relation between them) from a knowledge base such as Freebase (Bollacker et al., 2008) were mapped to free text (Wikipedia articles or New York Times articles).
The idea is that if a sentence contains both entities of a tuple, it is chosen as a training sentence of that tuple. Although this process can generate some noisy training instances, it can give a significant amount of training data which can be used to build supervised models for this task. Mintz et al. (2009), Riedel et al. (2010), and Hoffmann et al. (2011) proposed feature-based learning models and used entity tokens and their nearby tokens, their part-of-speech tags, and other linguistic features to train their models. Recently, many neural network-based models have been proposed to avoid feature engineering. Zeng et al. (2014, 2015) used convolutional neural networks (CNN) with max-pooling to find the relation between two given entities. Though these models have been shown to perform reasonably well on distantly supervised data, they sometimes fail to find the relation when sentences are long and entities are located far from each other. CNN models with max-pooling have limitations in understanding the semantic similarity of words with the given entities, and they also fail to capture the long-distance dependencies among the words and entities, such as co-reference. In addition, all the words in a sentence may not be equally important in finding the relation, and this issue is more prominent in long sentences. Prior CNN-based models have limitations in identifying the multiple important factors to focus on in sentence-level relation extraction.
To address this issue, we propose a novel multi-factor attention model focusing on the syntactic structure of a sentence for relation extraction. We use a dependency parser to obtain the syntactic structure of a sentence. We use a linear form of attention to measure the semantic similarity of words with the given entities and combine it with the dependency distance of words from the given entities to measure their influence in identifying the relation. Also, single attention may not be able to capture all pieces of evidence for identifying the relation due to normalization of attention scores. Thus we use multi-factor attention in the proposed model. Experiments on the New York Times (NYT) corpus show that the proposed model outperforms prior work in terms of F1 scores on sentence-level relation extraction.

Task Description
Sentence-level relation extraction is defined as follows: given a sentence S and two entities E_1 and E_2 marked in the sentence, find the relation r(E_1, E_2) between these two entities in S from a pre-defined set of relations R ∪ {None}. None indicates that none of the relations in R holds between the two marked entities in the sentence. The relation between the entities is argument order-specific, i.e., r(E_1, E_2) and r(E_2, E_1) are not the same. The input to the system is a sentence S and two entities E_1 and E_2, and the output is the relation r(E_1, E_2).

Model Description
We use four types of embedding vectors in our model: (1) a word embedding vector w ∈ R^{d_w}; (2) an entity token indicator embedding vector z ∈ R^{d_z}, which indicates whether a word belongs to entity 1, entity 2, or neither entity; (3) a positional embedding vector u^1 ∈ R^{d_u}, which represents the linear distance of a word from the start token of entity 1; (4) another positional embedding vector u^2 ∈ R^{d_u}, which represents the linear distance of a word from the start token of entity 2.
We use a bi-directional long short-term memory (Bi-LSTM) (Hochreiter and Schmidhuber, 1997) layer to capture the interaction among words in a sentence S = {w_1, w_2, ..., w_n}, where n is the sentence length. The input to this layer is the concatenated vector x ∈ R^{d_w+d_z} of the word embedding vector w and the entity token indicator embedding vector z.
The forward LSTM and backward LSTM each produce an output vector at the t-th step. We concatenate them to obtain the t-th Bi-LSTM output h_t ∈ R^{2(d_w+d_z)}.
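As a rough illustration of the encoder's input and output shapes, the following NumPy sketch concatenates word and entity-indicator embeddings and runs a toy bidirectional recurrence. A plain tanh RNN stands in for the LSTM here; the weights and the `simple_rnn` helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d_w, d_z = 6, 50, 10            # sentence length and embedding sizes from the paper
W = rng.normal(size=(n, d_w))       # word embeddings (one row per token)
Z = rng.normal(size=(n, d_z))       # entity token indicator embeddings

# Concatenate to form the Bi-LSTM input x_t in R^(d_w + d_z)
X = np.concatenate([W, Z], axis=1)

def simple_rnn(inputs, Wx, Wh):
    """Toy tanh recurrence standing in for one LSTM direction."""
    h = np.zeros(Wh.shape[0])
    outs = []
    for x in inputs:
        h = np.tanh(Wx @ x + Wh @ h)
        outs.append(h)
    return np.stack(outs)

d_h = d_w + d_z                     # hidden size equals input size, as in the paper
Wx = rng.normal(size=(d_h, d_h)) * 0.1
Wh = rng.normal(size=(d_h, d_h)) * 0.1

h_fwd = simple_rnn(X, Wx, Wh)               # forward pass over t = 1..n
h_bwd = simple_rnn(X[::-1], Wx, Wh)[::-1]   # backward pass, re-reversed to align
H = np.concatenate([h_fwd, h_bwd], axis=1)  # h_t in R^(2(d_w + d_z))
```

With d_w = 50 and d_z = 10, each h_t has dimension 120, matching the Bi-LSTM output dimension reported in the parameter settings.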

Global Feature Extraction
We use a convolutional neural network (CNN) to extract the sentence-level global features for relation extraction. We concatenate the positional embeddings u 1 and u 2 of words with the hidden representation of the Bi-LSTM layer and use the convolution operation with max-pooling on concatenated vectors to extract the global feature vector.
Here, q_t ∈ R^{2(d_w+d_z+d_u)} is the concatenated vector for the t-th word and f is a convolutional filter vector of dimension 2k(d_w + d_z + d_u), where k is the filter width. The index i moves from 1 to n and produces a set of scalar values {c_1, c_2, ..., c_n}.
The max-pooling operation chooses the maximum c_max from these values as a feature. With f_g filters, we get a global feature vector v_g ∈ R^{f_g}.

Attention Modeling
Figure 1 shows the architecture of our attention model. We use a linear form of attention to find the semantically meaningful words in a sentence with respect to the entities, which provide the pieces of evidence for the relation between them. Our attention mechanism uses the entities as attention queries, so their vector representation is very important for our model. Named entities mostly consist of multiple tokens, and many of them may not be present in the training data or their frequency may be low. The nearby words of an entity can give significant information about the entity. Thus we use the tokens of an entity and its nearby tokens to obtain its vector representation, applying the convolution operation with max-pooling over the context of the entity. Here, f is a convolutional filter vector of size k(d_w + d_z), where k is the filter width and x is the concatenated vector of the word embedding vector (w) and the entity token indicator embedding vector (z). b and e are the start and end indices of the sequence of words comprising an entity and its neighboring context in the sentence, where 1 ≤ b ≤ e ≤ n. The index i moves from b to e and produces a set of scalar values {c_b, c_{b+1}, ..., c_e}. The max-pooling operation chooses the maximum c_max from these values as a feature. With f_e filters, we get the entity vector v_e ∈ R^{f_e}. We do this for both entities and get v_e^1 ∈ R^{f_e} and v_e^2 ∈ R^{f_e} as their vector representations. We then adopt a simple linear function with an attention matrix W_a to measure the semantic similarity of each word with the given entities; the resulting scores represent the semantic similarity of the i-th word with the two given entities.
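The entity-vector construction and the similarity scoring can be sketched as follows. The filter values, the `conv_maxpool` helper, and the bilinear form of the similarity score are illustrative assumptions (the exact scoring formula is not shown in this excerpt), with dimensions shrunk for readability.

```python
import numpy as np

rng = np.random.default_rng(1)

n, d, k, f_e = 10, 60, 3, 8         # small dims for illustration (paper uses f_e = 230)
X = rng.normal(size=(n, d))          # concatenated word + entity-indicator embeddings
filters = rng.normal(size=(f_e, k * d)) * 0.1

def conv_maxpool(X, filters, b, e):
    """Width-k convolution over tokens b..e followed by max-pooling.

    Returns one feature per filter, i.e. a vector in R^{f_e}."""
    k = filters.shape[1] // X.shape[1]
    windows = [X[i:i + k].reshape(-1) for i in range(b, e - k + 2)]
    C = np.tanh(np.stack(windows) @ filters.T)  # scalar c_i per window per filter
    return C.max(axis=0)                         # max-pooling over positions

v_e = conv_maxpool(X, filters, b=2, e=6)         # entity span plus context, tokens 2..6

# A simple linear similarity score of each word with the entity,
# score_i = h_i^T (W_a v_e); W_a is a randomly initialised stand-in here.
W_a = rng.normal(size=(d, f_e)) * 0.1
scores = X @ (W_a @ v_e)                          # one scalar per word
```

The same `conv_maxpool` shape logic applies to the global feature extraction, with positional embeddings appended to the per-token vectors.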
Not all words in a sentence are equally important in finding the relation between the two entities. Words which are closer to the entities are generally more important. To exploit this, we incorporate the syntactic structure of a sentence into our attention mechanism, obtained from the dependency parse tree of the sentence. Words which are closer to the entities in the dependency parse tree are more relevant to finding the relation. In our model, we define the dependency distance of every word from the head token (last token) of an entity as the number of edges along the dependency path (see Figure 2 for an example). We use a distance window of size ws: words whose dependency distance is within this window receive attention, and the other words are ignored. The details of our attention mechanism follow.
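Computing the dependency distance of each word amounts to a breadth-first search over the (undirected) edges of the parse tree. A minimal sketch, using a hypothetical toy parse:

```python
from collections import deque

def dependency_distances(n_tokens, edges, head_idx):
    """BFS over undirected dependency edges; returns the number of edges
    from head_idx to every token (None if unreachable)."""
    adj = [[] for _ in range(n_tokens)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    dist = [None] * n_tokens
    dist[head_idx] = 0
    q = deque([head_idx])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Toy parse for "John lives in New York", edges as (head, dependent) pairs
edges = [(1, 0), (1, 2), (2, 4), (4, 3)]
l1 = dependency_distances(5, edges, head_idx=0)  # distances from entity head "John"
l2 = dependency_distances(5, edges, head_idx=4)  # distances from entity head "York"

ws = 5
mask = [(a + b) / 2 <= ws for a, b in zip(l1, l2)]  # keep words within the window
```

A real implementation would read the edges from the spaCy parse rather than hard-coding them.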
Here, d_i^1 and d_i^2 are the un-normalized attention scores and p_i^1 and p_i^2 are the normalized attention scores of the i-th word with respect to entity 1 and entity 2 respectively. l_i^1 and l_i^2 are the dependency distances of the i-th word from the two entities. We mask those words whose average dependency distance from the two entities is larger than ws. Our attention mechanism thus uses the semantic meaning of the words and their dependency distances from the two entities together. The attention feature vectors v_a^1 and v_a^2 with respect to the two entities are obtained as the attention-weighted averages of the Bi-LSTM hidden vectors.
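The excerpt does not show the exact formula combining the similarity scores with the dependency distances, so the sketch below uses one plausible reading: divide each score by (1 + distance), mask words outside the window, and normalize; the attentive feature vector is then the weighted average of the hidden vectors.

```python
import numpy as np

def distance_weighted_attention(scores, dists, ws):
    """One plausible distance-weighted attention: down-weight distant
    words, mask words beyond the window ws, then normalise."""
    scores = np.asarray(scores, dtype=float)
    dists = np.asarray(dists, dtype=float)
    d = scores / (1.0 + dists)       # closer words get larger weight
    d[dists > ws] = -np.inf          # mask out-of-window words
    e = np.exp(d - d[np.isfinite(d)].max())
    e[~np.isfinite(d)] = 0.0
    return e / e.sum()               # normalised attention p_i

H = np.eye(4)                        # toy Bi-LSTM hidden vectors (one-hot for clarity)
p1 = distance_weighted_attention([2.0, 1.0, 0.5, 3.0], [1, 2, 9, 1], ws=5)
v_a1 = p1 @ H                        # attentive feature vector w.r.t. entity 1
```

The third word (distance 9 > ws) receives exactly zero attention, and the remaining weights sum to one.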

Multi-Factor Attention
Two entities in a sentence, when located far from each other, can be linked via more than one co-reference chain or more than one important word. Due to the normalization of the attention scores as described above, a single attention vector cannot capture all relevant information needed to find the relation between two entities. Thus we use a multi-factor attention mechanism, where the number of factors is a hyper-parameter, to gather all relevant information for identifying the relation. We replace the attention matrix W_a with an attention tensor W_a^{1:m} ∈ R^{m×2(d_w+d_z)×2f_e}, where m is the factor count. This gives us m attention vectors with respect to each entity. We concatenate all the feature vectors obtained using these attention vectors to get the multi-attentive feature vector v_ma ∈ R^{4m(d_w+d_z)}.
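A minimal sketch of multi-factor attention, assuming a bilinear attention score per factor and per entity (the scoring function and tensor shapes are simplified; this is not the authors' exact formulation):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d_h, f_e, m = 7, 12, 8, 3        # m attention factors (a hyper-parameter)
H = rng.normal(size=(n, d_h))        # Bi-LSTM hidden vectors
v_e1 = rng.normal(size=(f_e,))       # entity-1 vector
v_e2 = rng.normal(size=(f_e,))       # entity-2 vector

# One attention matrix per factor, stacked as a tensor.
W = rng.normal(size=(m, d_h, f_e)) * 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

feats = []
for j in range(m):                    # each factor attends independently
    for v_e in (v_e1, v_e2):
        p = softmax(H @ (W[j] @ v_e))  # attention over words for this factor
        feats.append(p @ H)            # weighted average of hidden vectors

v_ma = np.concatenate(feats)           # multi-attentive feature vector
```

With m factors and two entities, v_ma concatenates 2m attentive vectors, each of the Bi-LSTM hidden dimension, matching the R^{4m(d_w+d_z)} size stated above.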

Relation Extraction
We concatenate v g , v ma , v 1 e , and v 2 e , and this concatenated feature vector is given to a feed-forward layer with softmax activation to predict the normalized probabilities for the relation labels.
Specifically, r = softmax(W_r v + b_r), where v is the concatenated feature vector, W_r is the weight matrix, b_r is the bias vector of the feed-forward layer for relation extraction, and r is the vector of normalized probabilities of relation labels.
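The classification layer is a standard softmax over an affine transform of the concatenated feature vector; a minimal sketch with illustrative dimensions (W_r and b_r are randomly initialized stand-ins):

```python
import numpy as np

rng = np.random.default_rng(3)
d_feat, n_rel = 20, 25              # feature size and |R| + 1 labels (NYT11: 24 + None)
v = rng.normal(size=(d_feat,))       # concatenation of v_g, v_ma, v_e^1, v_e^2
W_r = rng.normal(size=(n_rel, d_feat)) * 0.1
b_r = np.zeros(n_rel)

logits = W_r @ v + b_r
r = np.exp(logits - logits.max())
r = r / r.sum()                      # normalised probabilities over relation labels
```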

Loss Function
We calculate the loss over each mini-batch of size B. We use the following negative log-likelihood as our objective function for relation extraction: L = −(1/B) Σ_{i=1}^{B} log p(r_i | s_i, e_i^1, e_i^2, θ), where p(r_i | s_i, e_i^1, e_i^2, θ) is the conditional probability of the true relation r_i when the sentence s_i, the two entities e_i^1 and e_i^2, and the model parameters θ are given.
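The mini-batch negative log-likelihood can be sketched directly:

```python
import numpy as np

def nll_loss(probs, gold):
    """Mean negative log-likelihood over a mini-batch.

    probs: (B, |R|+1) predicted distributions; gold: (B,) true label ids."""
    probs = np.asarray(probs)
    return -np.mean(np.log(probs[np.arange(len(gold)), gold]))

# Toy batch of B = 2 predictions over three labels
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
loss = nll_loss(probs, gold=[0, 1])
```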

Datasets
We use two versions of the New York Times (NYT) corpus in our experiments. (1) The version created by Riedel et al. (2010) by aligning Freebase (Bollacker et al., 2008) tuples to New York Times articles. We name this dataset NYT10. (2) Another version created by Hoffmann et al. (2011) which has 24 valid relations and a None relation. We name this dataset NYT11. The corresponding statistics for NYT11 are given in Table 1. Its training dataset is created by aligning Freebase tuples to NYT articles, but its test dataset is manually annotated.

Evaluation Metrics
We use precision, recall, and F1 scores to evaluate the performance of models on relation extraction after removing the None labels. We use a confidence threshold to decide if the relation of a test instance belongs to the set of relations R or None. If the network predicts None for a test instance, then it is considered as None only. But if the network predicts a relation from the set R and the corresponding softmax score is below the confidence threshold, then the final class is changed to None. This confidence threshold is the one that achieves the highest F1 score on the validation dataset. We also include the precision-recall curves for all the models.
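The thresholding rule can be written as a small helper (the relation names here are hypothetical):

```python
def final_label(predicted, score, threshold):
    """Apply the confidence threshold: a predicted relation whose softmax
    score falls below the threshold is re-labelled as None; a predicted
    None always stays None."""
    if predicted == "None":
        return "None"
    return predicted if score >= threshold else "None"

# The threshold itself would be tuned to maximise F1 on the validation set.
examples = [("born_in", 0.9), ("born_in", 0.3), ("None", 0.9)]
labels = [final_label(p, s, threshold=0.5) for p, s in examples]
```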

Parameter Settings
We run word2vec (Mikolov et al., 2013) on the NYT corpus to obtain the initial word embeddings with dimension d_w = 50 and update the embeddings during training. We set the dimension of the entity token indicator embedding vector to d_z = 10 and of the positional embedding vectors to d_u = 5. The hidden layer dimension of the forward and backward LSTM is 60, which is the same as the dimension of the input word representation vector x. The dimension of the Bi-LSTM output is 120. We use f_g = f_e = 230 filters of width k = 3 for feature extraction whenever we apply the convolution operation. We use dropout in our network with a dropout rate of 0.5, and in convolutional layers, we use the tanh activation function. We use the sequence of tokens starting from 5 words before the entity to 5 words after the entity as its context. We train our models with a mini-batch size of 50 and optimize the network parameters using the Adagrad optimizer (Duchi et al., 2011). We use the dependency parser from spaCy to obtain the dependency distances of the words from the entities and use ws = 5 as the window size for dependency distance-based attention.

Comparison to Prior Work
We compare our proposed model with the following state-of-the-art models.
(1) CNN (Zeng et al., 2014): Words are represented using word embeddings and two positional embeddings. A convolutional neural network (CNN) with max-pooling is applied to extract the sentence-level feature vector. This feature vector is passed to a feed-forward layer with softmax to classify the relation.
(2) PCNN (Zeng et al., 2015): Words are represented using word embeddings and two positional embeddings. A convolutional neural network (CNN) is applied to the word representations. Rather than applying a global max-pooling operation on the entire sentence, three max-pooling operations are applied on three segments of the sentence based on the location of the two entities (hence the name Piecewise Convolutional Neural Network (PCNN)). The first max-pooling operation is applied from the beginning of the sentence to the end of the entity appearing first in the sentence. The second max-pooling operation is applied from the beginning of the entity appearing first in the sentence to the end of the entity appearing second in the sentence. The third max-pooling operation is applied from the beginning of the entity appearing second in the sentence to the end of the sentence. Max-pooled features are concatenated and passed to a feed-forward layer with softmax to determine the relation.
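Piecewise max-pooling can be sketched as follows; the segment boundaries follow the description above (sentence start to end of the first entity, start of the first entity to end of the second entity, start of the second entity to sentence end):

```python
import numpy as np

def piecewise_maxpool(C, e1_start, e1_end, e2_start, e2_end):
    """PCNN-style pooling: max-pool each of the three segments delimited
    by the two entities, then concatenate.

    C: (n, f) per-position convolution features; entity spans are given as
    half-open [start, end) token index ranges, first-appearing entity first."""
    segs = [C[:e1_end], C[e1_start:e2_end], C[e2_start:]]
    return np.concatenate([s.max(axis=0) for s in segs])

# Toy features for a 6-token sentence with 4 filters; values grow with position
# so each segment's max is simply its last row.
C = np.arange(24, dtype=float).reshape(6, 4)
v = piecewise_maxpool(C, e1_start=1, e1_end=2, e2_start=4, e2_end=5)
```

With f filters this yields a 3f-dimensional feature vector, versus f for global max-pooling.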
(3) Entity Attention (EA) (Shen and Huang, 2016): This is the combination of a CNN model and an attention model. Words are represented using word embeddings and two positional embeddings. A CNN with max-pooling is used to extract global features. Attention is applied with respect to the two entities separately. The vector representation of every word is concatenated with the word embedding of the last token of the entity. This concatenated representation is passed to a feed-forward layer with tanh activation and then another feed-forward layer to get a scalar attention score for every word. The original word representations are averaged based on the attention scores to get the attentive feature vectors. The CNN-extracted feature vector and the two attentive feature vectors with respect to the two entities are concatenated and passed to a feed-forward layer with softmax to determine the relation.
(4) BiGRU Word Attention (BGWA) (Jat et al., 2017): Words are represented using word embeddings and two positional embeddings. They are passed to a bidirectional gated recurrent unit (BiGRU) (Cho et al., 2014) layer. Hidden vectors of the BiGRU layer are passed to a bilinear operator (a combination of two feed-forward layers) to compute a scalar attention score for each word. Hidden vectors of the BiGRU layer are multiplied by their corresponding attention scores. A piecewise CNN is applied on the weighted hidden vectors to obtain the feature vector. This feature vector is passed to a feed-forward layer with softmax to determine the relation.
[Table 2: Performance comparison of different models (precision/recall/F1 on each dataset). Recoverable rows: CNN (Zeng et al., 2014): 0.413/0.591/0.486 and 0.444/0.625/0.519; PCNN (Zeng et al., 2015): 0.380/0.642/0.477 and 0.446/0.679/0.538†; EA (Shen and Huang, 2016): values truncated. * denotes a statistically significant improvement over the previous best state-of-the-art model with p < 0.01 under the bootstrap paired t-test. † denotes the previous best state-of-the-art model.]
(5) BiLSTM-CNN: This is our own baseline. Words are represented using word embeddings and entity indicator embeddings. They are passed to a bidirectional LSTM. Hidden representations of the LSTMs are concatenated with two positional embeddings. We use a CNN and max-pooling on the concatenated representations to extract the feature vector. Also, we use a CNN and max-pooling on the word embeddings and entity indicator embeddings of the context words of the entities to obtain entity-specific features. These features are concatenated and passed to a feed-forward layer to determine the relation.

Experimental Results
We present the results of our final model on the relation extraction task on the two datasets in Table 2. Our model outperforms the previous state-of-the-art models on both datasets in terms of F1 score. On the NYT10 dataset, it achieves a 4.3% higher F1 score compared to the previous best state-of-the-art model, EA. Similarly, it achieves a 3.3% higher F1 score compared to the previous best state-of-the-art model, PCNN, on the NYT11 dataset. Our model improves the precision scores on both datasets while maintaining good recall scores. This will help to build a cleaner knowledge base with fewer false positives. We also show the precision-recall curves for the NYT10 and NYT11 datasets in Figures 3 and 4 respectively. The goal of any relation extraction system is to extract as many relations as possible with minimal false positives. If the recall score becomes very low, the coverage of the KB will be poor. From Figure 3, we observe that when the recall score is above 0.4, our model achieves higher precision than all the competing models on the NYT10 dataset. On the NYT11 dataset (Figure 4), when the recall score is above 0.6, our model achieves higher precision than the competing models. Achieving higher precision with a high recall score helps to build a cleaner KB with good coverage.

Varying the number of factors (m)
We investigate the effects of the multi-factor count (m) in our final model on the test datasets in Table 3. We observe that for the NYT10 dataset, m = {1, 2, 3} gives good performance with m = 1 achieving the highest F1 score. On the NYT11 dataset, m = 4 gives the best performance. These experiments show that the number of factors giving the best performance may vary depending on the underlying data distribution.

Effectiveness of Model Components
We include the ablation results on the NYT11 dataset in Table 4. When we add multi-factor attention to the baseline BiLSTM-CNN model without the dependency distance-based weight factor in the attention mechanism, we get a 0.8% F1 score improvement (A2−A1). Adding the dependency weight factor with a window size of 5 improves the F1 score by 3.2% (A3−A2). Increasing the window size to 10 reduces the F1 score marginally (A3−A4). Replacing the attention normalizing function with the softmax operation also reduces the F1 score marginally (A3−A5). In our model, we concatenate the features extracted by each attention layer. Rather than concatenating them, we can apply a max-pooling operation across the multiple attention scores to compute the final attention scores, which are then used to obtain the weighted average of the Bi-LSTM hidden vectors. This affects the model performance negatively, and the F1 score of the model decreases by 3.0% (A3−A6).

Performance with Varying Sentence Length and Varying Entity Pair Distance
We analyze the effects of our attention model with different sentence lengths in the two datasets in Figures 5 and 6. We also analyze the effects of our attention model with different distances between the two entities in the two datasets in Figures 7 and 8. We observe that with increasing sentence length and increasing distance between the two entities, the performance of all models drops. This shows that finding the relation between entities located far from each other is a more difficult task. Our multi-factor attention model with the dependency-distance weight factor increases the F1 score in all configurations when compared to previous state-of-the-art models on both datasets.

Related Work
Relation extraction from a distantly supervised dataset is an important task and many researchers (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011) tried to solve this task using feature-based classification models. Recently, Zeng et al. (2014, 2015) used CNN models for this task, which can extract features automatically. Shen and Huang (2016) and Jat et al. (2017) used an attention mechanism in their models to improve performance. Surdeanu et al. (2012) and Lin et al. (2019) used multiple sentences in a multi-instance relation extraction setting to capture the features located in multiple sentences for a pair of entities. In their evaluation setting, they evaluated model performance by considering multiple sentences having the same pair of entities as a single test instance. On the other hand, our model and the previous models that we compare to in this paper (Zeng et al., 2014, 2015; Shen and Huang, 2016; Jat et al., 2017) work on each sentence independently and are evaluated at the sentence level.
Since there may not be multiple sentences that contain a pair of entities, it is important to improve the task performance at the sentence level. Future work can explore the integration of our sentence-level attention model in a multi-instance relation extraction framework. Not much previous research has exploited the dependency structure of a sentence in different ways for relation extraction. Xu et al. (2015) and Miwa and Bansal (2016) used an LSTM network and the shortest dependency path between two entities to find the relation between them. Huang et al. (2017) used the dependency structure of a sentence for the slot-filling task, which is close to the relation extraction task. Other work exploited the shortest dependency path between two entities and the sub-trees attached to that path (the augmented dependency path) for relation extraction. Zhang et al. (2018) and Guo et al. (2019) used graph convolution networks with pruned dependency tree structures for this task. In this work, we have incorporated the dependency distance of the words in a sentence from the two entities in a multi-factor attention mechanism to improve sentence-level relation extraction.
Attention-based neural networks are quite successful for many other NLP tasks. Bahdanau et al. (2015) and Luong et al. (2015) used attention models for neural machine translation, Seo et al. (2017) used attention mechanism for answer span extraction. Vaswani et al. (2017) and Kundu and Ng (2018) used multi-head or multi-factor attention models for machine translation and answer span extraction respectively. He et al. (2018) used dependency distance-focused word attention model for aspect-based sentiment analysis.

Conclusion
In this paper, we have proposed a multi-factor attention model utilizing syntactic structure for relation extraction. The syntactic structure component of our model helps to identify the important words in a sentence, and the multi-factor component helps to gather the different pieces of evidence present in a sentence. Together, these two components improve the performance of our model on this task, and our model outperforms previous state-of-the-art models when evaluated on the New York Times (NYT) corpus, achieving significantly higher F1 scores.