Non-Projective Dependency Parsing with Non-Local Transitions

We present a novel transition system, based on the Covington non-projective parser, introducing non-local transitions that can directly create arcs involving nodes to the left of the current focus positions. This avoids the need for long sequences of No-Arc transitions to create long-distance arcs, thus alleviating error propagation. The resulting parser outperforms the original version and achieves the best accuracy on the Stanford Dependencies conversion of the Penn Treebank among greedy transition-based parsers.


Introduction
Greedy transition-based parsers are popular in NLP, as they provide competitive accuracy with high efficiency. They analyze a sentence syntactically by greedily applying transitions that read it from left to right and build a dependency tree.
However, this greedy process is prone to error propagation: one wrong choice of transition can lead the parser to an erroneous state, causing more incorrect decisions. This is especially harmful for long attachments, which require a larger number of transitions. In addition, transition-based parsers traditionally focus on only two words of the sentence and their local context to choose the next transition. This lack of a global perspective favors errors when creating arcs that involve multiple transitions. As expected, transition-based parsers build short arcs more accurately than long ones.
Previous research by Fernández-González and Gómez-Rodríguez (2012) and Qi and Manning (2017) proves that the widely-used projective arc-eager transition-based parser of Nivre (2003) benefits from shortening the length of transition sequences by creating non-local attachments. In particular, they augmented the original transition system with new actions whose behavior entails more than one arc-eager transition and involves a context beyond the traditional two focus words. Attardi (2006) and Sartorio et al. (2013) also extended the arc-standard transition-based algorithm (Nivre, 2004) with the same success.
In the same vein, we present a novel unrestricted non-projective transition system based on the well-known algorithm by Covington (2001) that shortens the transition sequence necessary to parse a given sentence by the original algorithm, which becomes linear instead of quadratic with respect to sentence length. To achieve that, we propose new transitions that affect non-local words and are equivalent to one or more Covington actions, in a similar way to the transitions defined by Qi and Manning (2017) based on the arc-eager parser. Experiments show that this novel variant significantly outperforms the original one in all datasets tested, and achieves the best reported accuracy for a greedy dependency parser on the Stanford Dependencies conversion of the WSJ Penn Treebank.

Non-Projective Covington Parser
The original non-projective parser defined by Covington (2001) was modelled under the transition-based parsing framework by Nivre (2008). We only sketch this transition system briefly for space reasons, and refer to Nivre (2008) for details.
Parser configurations have the form c = ⟨λ1, λ2, B, A⟩, where λ1 and λ2 are lists of partially processed words, B is a list (called the buffer) of unprocessed words, and A is the set of dependency arcs built so far. Given an input string w1 · · · wn, the parser starts at the initial configuration cs(w1 . . . wn) = ⟨[], [], [1 . . . n], ∅⟩ and runs transitions until a terminal configuration of the form ⟨λ1, λ2, [], A⟩ is reached: at that point, A contains the dependency graph for the input.¹

[Figure 1: Transitions of the non-projective Covington (top) and NL-Covington (bottom) dependency parsers. The notation i →* j ∈ A means that there is a (possibly empty) directed path from i to j in A.]

The set of transitions is shown in the top half of Figure 1. Their logic can be summarized as follows: when in a configuration of the form ⟨λ1|i, λ2, j|B, A⟩, the parser has the chance to create a dependency involving words i and j, which we will call the left and right focus words of that configuration. The Left-Arc and Right-Arc transitions are used to create a leftward (i ← j) or rightward (i → j) arc, respectively, between these words; they also move i from λ1 to the first position of λ2, effectively moving the focus to i − 1 and j. If no dependency is desired between the focus words, the No-Arc transition makes the same modification of λ1 and λ2, but without building any arc. Finally, the Shift transition moves the whole content of the list λ2 plus j to λ1 when no more attachments are pending between j and the words of λ1, thus reading a new input word and placing the focus on j and j + 1. Transitions that create arcs are disallowed in configurations where this would violate the single-head or acyclicity constraints (cycles and nodes with multiple heads are not allowed in the dependency graph). Figure 3 shows the transition sequence in the Covington transition system that derives the dependency graph in Figure 2.
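The transition logic just described can be rendered as pure functions on configurations. The following is a hypothetical Python sketch of Figure 1's top half, not the authors' implementation; configurations are tuples (lambda1, lambda2, buffer, arcs), with arcs stored as (head, dependent) pairs and the pre-conditions (single-head, acyclicity) omitted for brevity:

```python
def initial(n):
    """Initial configuration for a sentence of n words (positions 1..n)."""
    return [], [], list(range(1, n + 1)), set()

def left_arc(c):
    """Create i <- j (head j), moving i to the front of lambda2."""
    l1, l2, b, arcs = c
    i, j = l1[-1], b[0]
    return l1[:-1], [i] + l2, b, arcs | {(j, i)}

def right_arc(c):
    """Create i -> j (head i), moving i to the front of lambda2."""
    l1, l2, b, arcs = c
    i, j = l1[-1], b[0]
    return l1[:-1], [i] + l2, b, arcs | {(i, j)}

def no_arc(c):
    """Move i to the front of lambda2 without building any arc."""
    l1, l2, b, arcs = c
    return l1[:-1], [l1[-1]] + l2, b, arcs

def shift(c):
    """Read a new word: lambda1, lambda2 and j become the new lambda1."""
    l1, l2, b, arcs = c
    return l1 + l2 + [b[0]], [], b[1:], arcs
```

Note how every transition except Shift moves exactly one word out of λ1, which is why distant attachments force long No-Arc runs.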
The resulting parser can generate arbitrary non-projective trees, and its complexity is O(n²).

Non-Projective NL-Covington Parser
The original logic described by Covington (2001) parses a sentence by systematically traversing every pair of words. The Shift transition, introduced by Nivre (2008) in the transition-based version, is an optimization that avoids the need to apply a sequence of No-Arc transitions to empty the list λ1 before reading a new input word. However, there are still situations where sequences of No-Arc transitions are needed. For example, if we are in a configuration C with focus words i and j and the next arc we need to create goes from j to i − k + 1 (k > 1), then we will need k − 1 consecutive No-Arc transitions to move the left focus word there and then apply Left-Arc. This could be avoided if a non-local Left-Arc transition could be undertaken directly at C, creating the required arc and moving k words to λ2 at once. The advantage of such an approach would be twofold: (1) less risk of making a mistake at C due to considering a limited local context, and (2) a shorter transition sequence, alleviating error propagation.

¹ Note that, in general, A is a forest, but it can be converted to a tree by linking headless nodes as dependents of an artificial root node at position 0.
We present a novel transition system called NL-Covington (for "non-local Covington"), described in the bottom half of Figure 1. It consists of a modification of the non-projective Covington algorithm where: (1) the Left-Arc and Right-Arc transitions are parameterized with k, allowing the immediate creation of any attachment between j and the kth rightmost word in λ1 while moving k words to λ2 at once, and (2) the No-Arc transition is removed since it is no longer necessary. This new transition system can use some restricted global information to build non-local dependencies and, consequently, reduce the number of transitions needed to parse the input. For instance, as presented in Figure 4, the NL-Covington parser will need 9 transitions, instead of 12 traditional Covington actions, to analyze the sentence in Figure 2.
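In the same hypothetical Python rendering (configurations as (lambda1, lambda2, buffer, arcs) tuples, arcs as (head, dependent) pairs; a sketch, not the released code), the parameterized transitions collapse k − 1 No-Arcs and one arc action into a single step:

```python
def left_arc_k(c, k):
    """Create an arc from j to the kth rightmost word of lambda1, moving
    the k rightmost words of lambda1 to lambda2 at once (equivalent to
    k - 1 No-Arc transitions followed by a local Left-Arc)."""
    l1, l2, b, arcs = c
    return l1[:-k], l1[-k:] + l2, b, arcs | {(b[0], l1[-k])}

def right_arc_k(c, k):
    """Symmetric case: create an arc from the kth rightmost word of
    lambda1 to j, with the same movement of k words to lambda2."""
    l1, l2, b, arcs = c
    return l1[:-k], l1[-k:] + l2, b, arcs | {(l1[-k], b[0])}
```

With k = 1 both functions behave exactly like the local Left-Arc and Right-Arc of the original parser, so the original system is the special case where k is fixed to 1 and No-Arc is kept.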
In fact, while in the standard Covington algorithm a transition sequence for a sentence of length n has length O(n²) in the worst case (if all nodes are connected to the first node, then we need to traverse every node to the left of each right focus word), for NL-Covington the sequence length is always O(n): one Shift transition for each of the n words, plus one arc-building transition for each of the n − 1 arcs in the dependency tree. Note, however, that this does not affect the parser's time complexity, which is still quadratic as in the original Covington parser. This is because the algorithm has O(n) possible transitions to be scored at each configuration, while the original Covington has O(1) transitions due to being limited to creating local leftward/rightward arcs between the focus words.
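The length argument can be checked with a back-of-the-envelope counter (our own illustration, assuming the static-oracle behaviour of each parser; positions are 1-based and arcs are (head, dependent) pairs):

```python
def covington_len(n, arcs):
    """Transition count for the standard Covington oracle: while j is the
    right focus word, one transition (arc or No-Arc) is spent on every
    lambda1 position down to the farthest word linked to j, then one
    Shift reads the next word."""
    total = 0
    for j in range(1, n + 1):
        linked = [min(h, d) for (h, d) in arcs if max(h, d) == j]
        if linked:
            total += j - min(linked)  # arcs plus interleaved No-Arcs
        total += 1                    # Shift
    return total

def nl_covington_len(n, arcs):
    """NL-Covington: one Shift per word plus one transition per arc."""
    return n + len(arcs)

# Worst case: every word attached to the first node.
chain = {(1, j) for j in range(2, 6)}
print(covington_len(5, chain))     # n(n+1)/2 = 15, quadratic growth
print(nl_covington_len(5, chain))  # 2n - 1 = 9, linear growth
```

On the worst-case tree the standard parser needs n(n+1)/2 transitions while NL-Covington needs only 2n − 1, matching the O(n²) vs. O(n) bounds above.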
The completeness and soundness of NL-Covington can easily be proved, as there is a mapping between transition sequences of both parsers: a sequence of k − 1 No-Arc transitions followed by one arc transition in Covington is equivalent to a Left-Arc_k or Right-Arc_k in NL-Covington.

[Figure 4: Transition sequence for parsing the sentence in Figure 2 using the NL-Covington parser (LA=LEFT-ARC, RA=RIGHT-ARC, SH=SHIFT).]

Data and Evaluation
We use 9 datasets² from the CoNLL-X shared task (Buchholz and Marsi, 2006) and all datasets from the CoNLL-XI shared task. To compare our system to the current state-of-the-art transition-based parsers, we also evaluate it on the Stanford Dependencies conversion of the WSJ Penn Treebank (PT-SD). We repeat each experiment with three independent random initializations and report the average accuracy. Statistical significance is assessed by a paired test with 10,000 bootstrap samples.

Model
To implement our approach we take advantage of the model architecture described in Qi and Manning (2017) for the arc-swift parser, which extends the architecture of Kiperwasser and Goldberg (2016) by applying a biaffine combination during the featurization process. We implement both the Covington and NL-Covington parsers under this architecture, adapt the featurization process with biaffine combination of Qi and Manning (2017) to these parsers, and use the same training setup. More details about these model parameters are provided in Appendix A.
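For intuition, the core of a biaffine combination can be sketched as follows. This is a generic illustration with NumPy, not the authors' exact network: the function names, shapes, and the identity that parameters W, u, v are learned are all our assumptions.

```python
import numpy as np

def biaffine(H, D, W, u, v):
    """Generic biaffine scorer sketch: S[i, j] = H[i] @ W @ D[j]
    + u @ H[i] + v @ D[j].
    H: (m, dim) candidate head vectors, D: (p, dim) candidate dependent
    vectors (e.g. BiLSTM states); W (dim, dim), u (dim,), v (dim,) are
    learned parameters."""
    return H @ W @ D.T + (H @ u)[:, None] + (D @ v)[None, :]
```

A scorer of this shape lets every candidate pair be scored jointly, which is what allows the parameterized arc transitions to be ranked against each other at each configuration.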
Since this architecture uses batch training, we train with a static oracle. The NL-Covington algorithm has no spurious ambiguity at all, so there is only one possible static oracle: canonical transition sequences are generated by choosing the transition that builds the shortest pending gold arc involving the current right focus word j, or Shift if there are no unbuilt gold arcs involving j.
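This static oracle fits in a few lines. The sketch below is our own hypothetical rendering, with configurations represented as (lambda1, lambda2, buffer, arcs) tuples and gold arcs as (head, dependent) pairs:

```python
def static_oracle(c, gold):
    """Canonical NL-Covington transition for configuration c: build the
    shortest pending gold arc involving the right focus word j, or
    Shift if no unbuilt gold arc links j to a word in lambda1."""
    l1, l2, b, arcs = c
    j = b[0]
    # Pending gold arcs between j and lambda1 words; k + 1 is the
    # transition parameter (position counted from the right of lambda1).
    pending = [(k + 1, h) for k, i in enumerate(reversed(l1))
               for (h, d) in gold
               if {h, d} == {i, j} and (h, d) not in arcs]
    if not pending:
        return ("SHIFT",)
    k, head = min(pending)  # smallest k = closest word = shortest arc
    return ("LEFT-ARC", k) if head == j else ("RIGHT-ARC", k)
```

Because each gold arc corresponds to exactly one parameterized transition, this oracle is deterministic, reflecting the absence of spurious ambiguity noted above.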
We note that a dynamic oracle can be obtained for the NL-Covington parser by adapting the one for the standard Covington parser of Gómez-Rodríguez and Fernández-González (2015). As NL-Covington transitions are concatenations of Covington ones, their loss calculation algorithm is compatible with NL-Covington. Apart from error exploration, this also opens the way to incorporating non-monotonicity (Fernández-González and Gómez-Rodríguez, 2017). While these approaches have been shown to improve accuracy under online training settings, here we prioritize homogeneous comparability to Qi and Manning (2017), so we use batch training and a static oracle, and still obtain state-of-the-art accuracy for a greedy parser.
The "Type" column in Table 2 shows the type of parser: gs is a greedy parser trained with a static oracle, gd a greedy parser trained with a dynamic oracle, b(n) a beam search parser with beam size n, dp a parser that employs global training with dynamic programming, and c a constituent parser with conversion to dependencies.

Results
Table 1 presents a comparison between the Covington parser and the novel variant developed here. The NL-Covington parser outperforms the original version in all datasets tested, with all improvements statistically significant (α = .05).

Table 2 compares our novel system with other state-of-the-art transition-based dependency parsers on the PT-SD. Greedy parsers are in the first block, beam-search and dynamic-programming parsers in the second block. The third block shows the best result on this benchmark, obtained with constituent parsing with generative re-ranking and conversion to dependencies. Despite being the only non-projective parser tested on a practically projective dataset,⁴ our parser achieves the highest score among greedy transition-based models (even above those trained with a dynamic oracle).
We even slightly outperform the arc-swift system of Qi and Manning (2017), with the same model architecture, implementation and training setup, but based on the projective arc-eager transition-based parser instead. This may be because our system takes into consideration any permissible attachment between the focus word j and any word in λ1 at each configuration, while their approach is limited by the arc-eager logic: it allows all possible rightward arcs (possibly fewer than our approach, as the arc-eager stack usually contains a small number of words), but only one leftward arc is permitted per parser state. It is also worth noting that the arc-swift and NL-Covington parsers have the same worst-case time complexity, O(n²), as adding non-local arc transitions to the arc-eager parser increases its complexity from linear to quadratic, but it does not affect the complexity of the Covington algorithm. Thus, it can be argued that this technique is better suited to Covington than to arc-eager parsing.
We also compare NL-Covington to the arc-swift parser on the CoNLL datasets (Table 3). For fairness of comparison, we projectivize (via MaltParser⁵) all training datasets, instead of filtering non-projective sentences, as some of the languages are significantly non-projective. Even so, the NL-Covington parser improves over the arc-swift system in terms of UAS in 14 out of 19 datasets, obtaining statistically significant improvements in accuracy on 7 of them, and statistically significant decreases in just one.
Finally, we analyze how our approach reduces the length of the transition sequence consumed by the original Covington parser. In Table 4 we report the transition sequence length per sentence used by the Covington and NL-Covington algorithms to analyze each dataset from the same benchmark used for evaluating parsing accuracy. As seen in the table, NL-Covington produces notably shorter transition sequences than Covington, with a reduction close to 50% on average.

⁵ http://www.maltparser.org/

Conclusion
We present a novel variant of the non-projective Covington transition-based parser by incorporating non-local transitions, reducing the length of transition sequences from O(n²) to O(n). This system clearly outperforms the original Covington parser and achieves the highest accuracy on the WSJ Penn Treebank (Stanford Dependencies) obtained to date with greedy dependency parsing.
Appendix A. Model Parameters

We implement both systems under the same framework, with the original Covington parser represented as the NL-Covington system plus the No-Arc transition and with k limited to 1. A thorough description of the model architecture and featurization mechanism can be found in Qi and Manning (2017). Our training setup is exactly the same as that of Qi and Manning (2017), training the models for 10 epochs on large datasets and 30 on small ones. In addition, we initialize word embeddings with 100-dimensional GloVe vectors (Pennington et al., 2014) for English and use 300-dimensional Facebook vectors (Bojanowski et al., 2016) for other languages. The other parameters of the neural network keep the same values.