Optimal Shift-Reduce Constituent Parsing with Structured Perceptron

We present a constituent shift-reduce parser with a structured perceptron that finds the optimal parse in practical run-time. The key ideas are new feature templates that facilitate the state merging of dynamic programming and A* search. Our system achieves 91.1 F1 on a standard English experiment, a level that cannot be reached by other beam-based systems even with large beam sizes.


Introduction
A parsing system comprises two components: a scoring model for a tree and a search algorithm. In shift-reduce parsing, most previous studies have focused on the former, typically by enriching feature templates, while search quality has often been taken less seriously. For example, the current state-of-the-art parsers for constituency (Zhu et al., 2013; Wang and Xue, 2014) and dependency (Bohnet et al., 2013) both employ beam search with a constant beam size, which may suffer from severe search errors. This is in contrast to ordinary PCFG parsing, which, while it often uses some approximations, achieves nearly optimal search quality (Petrov and Klein, 2007).
In this paper, we instead investigate whether we can obtain a practical shift-reduce parser with state-of-the-art accuracy by focusing on optimal search quality, as in PCFG parsing. We base our system on the best-first search for shift-reduce parsing formulated by Zhao et al. (2013), but ours differs from their approach in two respects. First, we focus on constituent parsing while they use dependency grammar. Second, and more crucially, they use a locally trained MaxEnt model, which is simple but not strong, whereas we explore a structured perceptron, the current state of the art in shift-reduce parsing (Zhu et al., 2013).
As we will see, this model change makes search quite hard, which motivates us to invent new feature templates as well as to improve the search algorithm. In existing parsers, features are commonly extracted from the parsing history, such as the top k elements on the stack. However, such features are expensive in terms of search efficiency. Instead of relying primarily on features from the stack, our features mostly come from the spans of the top few nodes, an idea inspired by the recent empirical success of CRF parsing (Hall et al., 2014). We show that these span features fit quite well in the shift-reduce system and lead to state-of-the-art accuracy. We further improve search with new A* heuristics that make optimal search for shift-reduce parsers with a structured perceptron tractable for the first time.
The primary contribution of this paper is to demonstrate the effectiveness and practicality of optimal search for shift-reduce parsing, especially when combined with appropriate features and efficient search. In English Penn Treebank experiments, our parser achieves an F1 score of 91.1 on the test set at a speed of 13.6 sentences per second. This score exceeds that of a beam-based system with a larger beam size running at the same speed.

Shift-Reduce Constituent Parsing
We first introduce the shift-reduce algorithm for constituent structures. For space reasons, our exposition is rather informal; see Zhang and Clark (2009) for details. A shift-reduce parser parses a sentence through transitions between states, each of which consists of two data structures: a stack and a queue. The stack preserves intermediate parse results, while the queue holds unprocessed tokens. At each step, the parser selects an action, which changes the current state into a new one. For example, SHIFT pops the front word from the queue and pushes it onto the stack, while REDUCE(X) combines the top two elements on the stack into their parent. For example, if the top two elements on the stack are DT and NN, REDUCE(NP) combines them by applying the CFG rule NP → DT NN.
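To make the transition mechanics concrete, the following minimal sketch (with hypothetical function names and a toy rule; it is not the authors' implementation) applies SHIFT twice and then REDUCE(NP) to a two-word input:

```python
# Minimal sketch of shift-reduce states (illustrative names, not the paper's code).
# A state holds a stack of subtrees and a queue of unprocessed (word, tag) tokens.

def shift(stack, queue):
    """SHIFT: pop the front token from the queue and push it onto the stack."""
    return stack + [queue[0]], queue[1:]

def reduce_(stack, queue, label):
    """REDUCE(X): combine the top two stack elements under parent label X."""
    left, right = stack[-2], stack[-1]
    return stack[:-2] + [(label, left, right)], queue

# Parse "the dog" with the CFG rule NP -> DT NN:
stack, queue = [], [("the", "DT"), ("dog", "NN")]
stack, queue = shift(stack, queue)          # stack: [("the", "DT")]
stack, queue = shift(stack, queue)          # stack: [("the", "DT"), ("dog", "NN")]
stack, queue = reduce_(stack, queue, "NP")  # one NP subtree remains
print(stack[0][0])  # NP
```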
Unary Action The actions above are essentially the same as those in shift-reduce dependency parsing (Nivre, 2008), but a special action for constituent parsing, UNARY(X), complicates the system and search. For example, if the top element on the stack is NN, UNARY(NP) changes it to NP by applying the rule NP → NN. In particular, this causes inconsistency in the number of actions between derivations (Zhu et al., 2013), which makes it hard to apply the existing best-first search for dependency grammar to our system. We revisit this problem in Section 3.1.
Model The model of a shift-reduce parser gives a score to each derivation, i.e., an action sequence a = (a_1, …, a_|a|), in which each a_i is a shift or reduce action. Let p = (p_1, …, p_|a|) be the sequence of states, where p_i is the state after applying a_i to p_{i−1}; p_0 is the initial state for the input sentence w. Then, the score of a derivation Φ(a) is calculated as the total score of every action:

Φ(a) = Σ_{i=1}^{|a|} φ(a_i, p_{i−1}).   (1)

There are two well-known models, whose crucial difference is in the training criterion. The MaxEnt model is trained locally to select the correct action at each step. It assigns a probability to each action a_i as

p(a_i | p_{i−1}) = exp(θ · f(a_i, p_{i−1})) / Σ_{a′} exp(θ · f(a′, p_{i−1})),   (2)

where θ and f(a, p) are weight and feature vectors, respectively. Note that the probability of an action sequence a under this model is the product of local probabilities, though we can cast the total score in the summation form (1) by using the log of (2) as the local score φ(a_i, p_{i−1}). The structured perceptron is instead trained globally to select the correct action sequence given an input sentence. It does not use probabilities, and the local score is just φ(a_i, p_{i−1}) = θ · f(a_i, p_{i−1}). In practice, this global model is much stronger than the local MaxEnt model. However, training this model without any approximation is hard, and the common practice is to rely on well-known heuristics such as early update with beam search (Collins and Roark, 2004). We are not aware of any previous study that succeeded in training a structured perceptron for parsing without approximation. We show how this becomes possible in Section 3.
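As a sketch of Eq. (1) under the structured perceptron (the feature names below are toy examples, not the paper's templates), the derivation score is just the sum of local dot products θ · f(a_i, p_{i−1}):

```python
# Toy illustration of Eq. (1): the score of a derivation is the sum of the
# local scores phi(a_i, p_{i-1}) = theta . f(a_i, p_{i-1}).
# Sparse feature vectors are dicts; the feature names are made up.

theta = {"SH&top=DT": 0.5, "RE(NP)&top=NN": 1.5}

def local_score(theta, feats):
    """Dot product of the weight vector with one sparse feature vector."""
    return sum(theta.get(k, 0.0) * v for k, v in feats.items())

def derivation_score(theta, feature_seq):
    """feature_seq[i] plays the role of f(a_i, p_{i-1}) for each action."""
    return sum(local_score(theta, f) for f in feature_seq)

feats = [{"SH&top=DT": 1.0}, {"RE(NP)&top=NN": 1.0}]
print(derivation_score(theta, feats))  # 2.0
```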

Previous Best-First Shift-Reduce Parsing
The basic idea behind best-first search (BFS) for shift-reduce parsing is to treat each parser state as a node in a graph and then search for the minimal-cost path from the start state (node) to a final state. This is the idea of Sagae and Lavie (2006), later refined by Zhao et al. (2013). BFS assigns a priority to each state, and the state with the highest priority (lowest cost) is always processed first. BFS guarantees that the first goal found is the best (optimality) if the superiority condition is satisfied: a state never has a lower cost than the costs of its previous states.
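The search loop can be sketched with a binary heap; under the superiority condition (every step cost is non-negative), the first goal popped is optimal, exactly as in uniform-cost search. The names and toy graph below are illustrative, not from the paper:

```python
import heapq

def best_first_search(start, is_goal, expand):
    """Uniform-cost best-first search over parser states.
    expand(state) yields (cost, next_state) pairs with cost >= 0
    (the superiority condition), so the first goal popped is optimal."""
    queue = [(0.0, start)]
    best = {}  # state -> lowest cost found so far (plays the role of a chart)
    while queue:
        c, state = heapq.heappop(queue)
        if is_goal(state):
            return c, state
        if c > best.get(state, float("inf")):
            continue  # stale queue entry
        for step_cost, nxt in expand(state):
            nc = c + step_cost
            if nc < best.get(nxt, float("inf")):
                best[nxt] = nc
                heapq.heappush(queue, (nc, nxt))
    return None

# Tiny example: states are integers, the goal is state 3.
edges = {0: [(1.0, 1), (3.0, 2)], 1: [(1.0, 2)], 2: [(1.0, 3)], 3: []}
result = best_first_search(0, lambda s: s == 3, lambda s: edges[s])
print(result)  # (3.0, 3)
```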
Though the found parse is guaranteed to be optimal, in practice current BFS-based systems are not stronger than other systems with approximate search (Zhu et al., 2013; Wang and Xue, 2014), since all existing systems are based on the MaxEnt model. With this model, superiority can easily be achieved by using the negative log of (2), which is always positive and becomes smaller with higher probability. We focus instead on the structured perceptron, but achieving superiority with this model is not trivial. We resolve this problem in Section 3.1.
In addition to this mathematical convenience, the MaxEnt model itself helps search. Sagae and Lavie ascribe the empirical success of their BFS to the sparseness of the distribution over subsequent actions in the MaxEnt model. In other words, BFS is very efficient when only a few actions have dominant probabilities at each step, and the MaxEnt model facilitates this with its exponential operation (2). Unfortunately, this is not the case for our global structured perceptron, because the score of each action is just the sum of feature weights. Resolving this search difficulty is the central problem of this paper; we illustrate the problem in Section 4 and resolve it in Section 5.

Hypergraph Search of Zhao et al. (2013)
The worst-case time complexity of the BFS in Sagae and Lavie (2006) is exponential. For dependency parsing, Zhao et al. (2013) reduce it to polynomial by converting the search graph into a hypergraph using the state-merging technique of Huang and Sagae (2010). This hypergraph search is the basis of our parser, so we briefly review it here.
The algorithm is closely related to agenda-based best-first parsing algorithms for PCFGs (Klein and Manning, 2001; Pauls and Klein, 2009). As in those algorithms, it maintains two data structures: a chart C that preserves processed states, and a priority queue (agenda) Q. The difference is in the basic items processed in C and Q. In PCFG parsing, the items are spans. Each span abstracts many derivations over that span, and the chart maps a span to the best (lowest cost) derivation found so far. In shift-reduce parsing, the basic items are not spans but states, i.e., partial representations of the stack. We denote a state p = ⟨i, j, s_d…s_0⟩, where s_k is the k-th top subtree on the stack and s_0 spans i to j. We extract features from s_d…s_0. Note that d is constant, and a state usually does not contain full information about a derivation. In fact, it keeps only atomic features, the minimal information on the stack necessary to recover the full features, and thus packs many derivations together. The chart maps a state to the current best derivation. For example, if we extract features only from the root symbol of s_0, each state looks the same as a span in PCFG parsing.
Differently from the original shift-reduce algorithm, during this search reduce actions are defined between two states p and q. The basic operation of the algorithm is to pop the best (top) state p from the queue, push it into the chart, and then enqueue every state that can be obtained by a reduce action between p and other states in the chart, or by a shift action from p. The left states L(p) and right states R(p) are important concepts. L(p) is the set of states in the chart with which p can reduce from the right side. Formally, L(p) = {q ∈ C | f_{k−1}(q) = f_k(p) for 1 ≤ k ≤ d}, where f_k(·) returns the atomic features of the k-th top node. See Figure 4 for what these look like in constituent parsing. R(p) is defined similarly; p can reduce q ∈ R(p) from the left side. (Although Zhao et al. (2013) explain that the items in Q are derivations, not states, we can implement Q as a set of states by keeping backpointers in the standard way.) When p is popped, the algorithm searches for every L(p) and R(p) in the chart and tries to expand the current derivation.
The priority of each state is a pair (c, v): c is the prefix cost, the total cost to reach that state, while v is the inside cost, the cost to build the top node s_0. The top state in the queue is the one with the lowest prefix cost, with ties broken by the lowest inside cost.
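This (prefix cost, inside cost) ordering falls out naturally from lexicographic tuple comparison; a tiny sketch with toy costs (not values from the paper):

```python
import heapq

# Priority = (prefix cost c, inside cost v): states are ordered by prefix
# cost first, with ties broken by the inside cost. Values are illustrative.
queue = []
heapq.heappush(queue, ((2.0, 0.5), "p1"))
heapq.heappush(queue, ((1.0, 0.9), "p2"))
heapq.heappush(queue, ((1.0, 0.2), "p3"))  # same prefix cost as p2, lower inside cost

order = [heapq.heappop(queue)[1] for _ in range(3)]
print(order)  # ['p3', 'p2', 'p1']
```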

Best-First Shift-Reduce Constituent Parsing with Structured Perceptron
This section describes our basic parsing system, i.e., shift-reduce constituent parsing with BFS and the structured perceptron. We have to solve two problems. The first is how to achieve BFS with the structured perceptron, and the second is how to apply that BFS to constituent parsing. Interestingly, the solution to the first problem makes the second problem relatively trivial.

Superiority of Structured Perceptron
We must design the priority of each state to satisfy the superiority condition. φ(a_i, p_{i−1}) = θ · f(a_i, p_{i−1}) is the usual local score employed in structured perceptrons (Huang and Sagae, 2010), but we cannot use it as a local cost for two reasons. First, in our system the best parse should have the lowest cost; this is the opposite of the ordinary setting (Collins, 2002). We resolve this conflict by changing the direction of structured perceptron training so that the best parse has the lowest score. Second, each φ(a_i, p_{i−1}) can take a negative value, but a cost should always be positive. This is in contrast to the MaxEnt model, in which the negative log probability is always positive. Our strategy is to add a constant offset δ to every local cost; if δ is large enough that every local cost is positive, the superiority condition is satisfied.
Figure 1: The deductive system of our best-first shift-reduce constituent parsing, explaining how the prefix cost and inside cost are calculated. FIN is omitted. | on the stack denotes an append operation, and a(b) denotes a subtree a → b. t_j is the POS tag of the j-th token and w_j its surface form. c_a(p) is the cost of an action a whose features are extracted from p; each c_a(p) implicitly includes the offset δ.
Unary Merging Though this technique solves the problem with the structured perceptron for a simpler shift-reduce system, say for dependency grammar, the existence of unary actions, as mentioned in Section 2.1, requires additional effort in order to apply it to constituent parsing. In particular, constituent parsing takes different numbers of actions for each derivation, which means that the costs of two final states may contain different multiples of the offset δ. The existing modification that alleviates this inconsistency (Zhu et al., 2013) cannot be applied here because it is designed for beam search. We instead develop a new transition system in which the number of actions to reach the final state is always 2n (where n is the length of the sentence). The basic idea is to merge a unary action into each shift or reduce action. Our system uses five actions:
• SH: the original shift action;
• SHU(X): shift a node, then immediately apply a unary rule to that node;
• RE(X): the original reduce action;
• REU(Y, X): reduce to X first, then immediately apply the unary rule Y → X;
• FIN: finish the process.
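A toy check of this invariant (the derivations below are hand-built for illustration, not produced by a parser): any mix of SH/SHU and RE/REU actions over an n-word sentence uses n shift-type actions, n−1 reduce-type actions, and FIN, so every derivation accumulates exactly the same total offset 2n·δ:

```python
# Toy check that the merged transition system always uses 2n actions:
# n shift-type actions (SH/SHU), n-1 reduce-type actions (RE/REU), plus FIN.
# The derivations are illustrative, not generated by the real parser.

def num_actions(n_words, shu_positions, reu_positions):
    """Any mix of SH/SHU and RE/REU yields the same action count."""
    shifts = ["SHU" if i in shu_positions else "SH" for i in range(n_words)]
    reduces = ["REU" if i in reu_positions else "RE" for i in range(n_words - 1)]
    return len(shifts + reduces + ["FIN"])

n = 5
# Two derivations with different unary usage still take 2n actions,
# hence both accumulate the same total offset 2n * delta.
print(num_actions(n, shu_positions={0}, reu_positions=set()))     # 10
print(num_actions(n, shu_positions=set(), reu_positions={1, 3}))  # 10
```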
Though the system cannot perform consecutive unary actions, in practice it can generate any unary chain that appears in the training corpus by collapsing each chain into a single rule. We preprocess the corpus in this way along with binarization (see Section 4). Note that this system is quite similar to the transition system for dependency parsing; the only change is that we have several varieties of shift and reduce actions. This modification also makes it easy to apply algorithms developed for dependency parsing to constituent parsing, such as dynamic programming with beam search (Huang and Sagae, 2010), which had not been applied to constituent parsing until quite recently (Mi and Huang, 2015) (see Section 7).

BFS with Dynamic Programming
Applying the BFS of Zhao et al. (2013) for dependency parsing to constituent parsing is now not hard. Figure 1 shows the deductive system of the dynamic programming, which is very similar to that for dependency parsing. One important change is that we include the cost of a shift (SH or SHU) action in the prefix cost at the shift step, not at the reduce step as in Zhao et al. (2013), since it is unknown whether the top node s_0 of a state p was instantiated by SH or SHU. This modification preserves the correctness of the algorithm and has been employed in another system (Kuhlmann et al., 2011). The algorithm itself is also slightly changed; we show only the differences from Algorithm 1 of Zhao et al. (2013) in our Algorithm 1. shu(x) is a function that returns the set of states reachable by possible SHU rules applied to state x. re(x, y) and reu(x, y) are similar, returning the sets of states reached through one of the RE or REU actions. As a speed-up, we can apply a lazy expansion technique (we do so in our experiments). Another difference is in training. Previous best-first shift-reduce parsers are all trained in the same way as a parser with greedy search, since the model is local MaxEnt. In our case, we can use structured perceptron training with exact search (Collins, 2002); that is, at each iteration, for each sentence, we find the current argmin derivation with BFS, then update the parameters if it differs from the gold derivation. Note that at the beginning of training, BFS is inefficient due to the initially flat parameters. We use a heuristic to speed up this process: for the first few iterations (five, in our case), we train the model with beam search and early update (Collins and Roark, 2004). We find that this approximation does not affect performance, while it greatly reduces training time.
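The update step can be sketched as follows. The feature dictionaries and derivations are toy stand-ins, the exact-search decoder is abstracted away, and the update direction is flipped (as in Section 3.1) so that the gold derivation is driven toward the LOWEST score:

```python
# Sketch of structured perceptron training with exact search (Collins, 2002),
# adapted to the "lowest cost = best" convention of Section 3.1.
# Derivations are toy lists of sparse feature dicts, one per action.

def global_feats(derivation):
    """Sum the sparse feature dicts over all actions of a derivation."""
    total = {}
    for feats in derivation:
        for k, v in feats.items():
            total[k] = total.get(k, 0.0) + v
    return total

def perceptron_update(theta, gold, predicted):
    """If the argmin derivation differs from the gold one, move weights so
    the gold derivation gets a LOWER score (flipped vs. the usual argmax)."""
    if predicted == gold:
        return theta
    g, p = global_feats(gold), global_feats(predicted)
    for k in set(g) | set(p):
        theta[k] = theta.get(k, 0.0) - (g.get(k, 0.0) - p.get(k, 0.0))
    return theta

theta = {}
gold = [{"f1": 1.0}]       # toy gold derivation (one action)
predicted = [{"f2": 1.0}]  # toy argmin derivation found by search
theta = perceptron_update(theta, gold, predicted)
print(sorted(theta.items()))  # [('f1', -1.0), ('f2', 1.0)]
```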

Evaluation of Best-First Shift-Reduce Constituent Parsing
This section evaluates the empirical performance of the best-first constituent parser built in the previous section. As mentioned in Section 2.2, the previous empirical success of best-first shift-reduce parsers might be due to the sparsity property of the MaxEnt model, which may not hold for the structured perceptron. We investigate the validity of this assumption by comparing two systems: a locally trained MaxEnt model and a globally trained structured perceptron.
Setting We follow standard practice and train each model on sections 2-21 of the WSJ Penn Treebank (Marcus et al., 1993), binarized using the algorithm in Zhang and Clark (2009) with the head rules of Collins (1999). We report F1 scores on the development set, section 22. The Stanford POS tagger is used for part-of-speech tagging, and we use the EVALB program to evaluate parsing performance. Every experiment reported here was performed on the same hardware.
Feature We borrow the feature templates from Sagae and Lavie (2006). However, we found that the full feature templates make training and decoding of the structured perceptron much slower, and instead developed simplified templates by removing some, e.g., those that access the child information of the second top node on the stack.
Result Table 1 summarizes the results, which indicate that our assumption is true. The structured perceptron has the best score even though we restrict the features. However, its parsing speed is much slower than that of the local MaxEnt model. To see the difference in search behavior between the two models, Figure 2 plots the number of processed (popped) states during search.
Discussion This result may seem somewhat depressing. We have devised a new method that enables optimal search for the structured perceptron, but it cannot handle even modestly large feature templates. As we will see below, the time complexity of the system depends on the features used. We have tried the features of Sagae and Lavie (2006), but their features are no longer state-of-the-art. For example, Zhu et al. (2013) report higher scores by using beam search with much richer feature templates, though, as we have seen, it seems implausible to apply such features to our system. In the following, we find a practical solution that improves both parse accuracy and search efficiency in our system. We will see that our new features not only make BFS tractable, but also lead to comparable or even superior accuracy relative to the current mainstream features. When combined with A* search, the speed reaches a practical level.

Figure 3: A snippet of the hypergraph for the system that simulates a simple PCFG. p is the popped state, which is being expanded with one of its left states L(p) using a reduce rule.

Span Features
The worst-case time complexity of hypergraph search for shift-reduce parsing can be analyzed with the deduction rule of the reduce step. Figure 3 shows an example. In this case, the time complexity is O(n^3 · |G| · |N|), since there are three indices (i, j, k) and four nonterminals (A, B, C, D), three of which comprise a rule. The extra factor |N| compared with ordinary CKY parsing comes from the restriction that we extract features only from one state (Huang and Sagae, 2010).
The complexity increases when we add new atomic features to each state. For example, if we lexicalize this model by adding features that depend on the head indices of s_0 and/or s_1, it increases to O(n^6 · |G| · |N|), since we have to maintain the three head indices of A, B, and C. This is why Sagae and Lavie's features are too expensive for our system; they rely on the head indices of s_0, s_1, s_2, s_3, the left and right children of s_0 and s_1, and so on, leading to prohibitively high complexity. Historically speaking, the success of the shift-reduce approach in constituent parsing was led by its success in dependency parsing (Nivre, 2008), in which the head is the primary element, and we suspect this is why current constituent shift-reduce parsers mainly rely on deeper stack elements and their heads.
The features we propose here are extracted from fundamentally different parts of the state than these recent trends. Figure 4 explains how we extract atomic features from a state, and Table 2 shows the full list of feature templates. Our system is unlexicalized; i.e., it does not use any head indices.

Figure 4: Atomic features of our system largely come from the span of a constituent. For each span (s_0 and s_1), we extract the surface form and POS tag of the preceding word (bw, bt), the first word (fw, ft), the last word (lw, lt), and the subsequent word (aw, at). shape is the same as that in Hall et al. (2014). Bold symbols are additional information relative to the system of Figure 3. The time complexity is O(n^4 · |G|^3 · |N|).

Table 2: All feature templates in our span model. See Figure 4 for a description of each element. q_i is the i-th top token on the queue.

This feature design is largely inspired by the recent empirical success of span features in CRF parsing (Hall et al., 2014). Their main finding is that the surface information on a subtree, such as the first or last word of a span, carries essentially the same amount of information as its head. For our system, such span features are much cheaper, so we expect them to facilitate our dynamic programming without sacrificing accuracy.
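A sketch of extracting the span-surface features of Figure 4 for one span (the names bw/bt/fw/ft/lw/lt/aw/at follow the figure; the function itself and the boundary padding symbols are our illustrative assumptions):

```python
# Extract the span-surface features of Figure 4 for a span [i, j) over a
# POS-tagged sentence. Boundary padding with "<s>"/"</s>" is our assumption.

def span_features(tagged, i, j, prefix):
    """tagged: list of (word, tag); the span covers tokens i..j-1."""
    before = ("<s>", "<s>") if i == 0 else tagged[i - 1]
    after = ("</s>", "</s>") if j >= len(tagged) else tagged[j]
    fw, ft = tagged[i]       # first word/tag of the span
    lw, lt = tagged[j - 1]   # last word/tag of the span
    return {
        prefix + ".bw": before[0], prefix + ".bt": before[1],  # word/tag before span
        prefix + ".fw": fw, prefix + ".ft": ft,
        prefix + ".lw": lw, prefix + ".lt": lt,
        prefix + ".aw": after[0], prefix + ".at": after[1],    # word/tag after span
    }

sent = [("the", "DT"), ("big", "JJ"), ("dog", "NN"), ("ran", "VBD")]
f = span_features(sent, 0, 3, "s0")  # span "the big dog"
print(f["s0.fw"], f["s0.lw"], f["s0.aw"])  # the dog ran
```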
We customize their features to fit the shift-reduce framework. Unlike the usual setting of PCFG parsing, a shift-reduce parser receives a POS-tagged sentence as input, so we use both the POS tag and the surface form of each word on the span. One difficult part is using features that include an applied rule. We include this feature by memorizing the previously applied rule for each span (subtree). This is a bit costly, because it means we have to preserve the labels of the left and right children of each node, which leads to an additional |G|^2 factor in the complexity. However, we will see that this problem can be alleviated by the heuristic cost functions of our A* search described below.

A* Search
We now explain our A* search, another key technique for speeding up our search. To our knowledge, this is the first work to successfully apply A* search to shift-reduce parsing.
A* parsing (Klein and Manning, 2003a) modifies the calculation of the priority σ(p_i) of a state p_i. In BFS, the priority is basically the prefix cost, the sum of every local cost (Section 3.1), which we denote as β_{p_i}. A* parsing instead uses

σ(p_i) = β_{p_i} + h(p_i),

where h(p_i) is a heuristic cost. β_{p_i} corresponds to the Viterbi inside cost of PCFG parsing (Klein and Manning, 2003a), while h(p_i) is the Viterbi outside cost, an approximation of the cost of the future best path (action sequence) from p_i. h(p_i) must be a lower bound of the true Viterbi outside cost. In PCFG parsing, this is often achieved with a technique called projection. Let G* be a projected, or relaxed, grammar of the original G; then a rule weight in the relaxed grammar becomes w_{r*} = min_{r ∈ G : π(r) = r*} w_r, where π(r) is a projection function that maps each rule r of G to its corresponding rule r* in G*.
In feature-based shift-reduce parsing, a rule weight corresponds to the sum of feature weights for an action a, that is, φ(a, p_i) = θ · f(a, p_i). We calculate h(p_i) with a relaxed feature function φ*(a, p_i) = θ* · f(a, p_i), which always returns a lower bound: φ*(a, p_i) ≤ φ(a, p_i). Note that we only have to modify the weight vector: if the relaxed weights satisfy θ*(k) ≤ θ(k) for all k, the projection is correct.
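This admissibility condition can be checked mechanically; a sketch with toy weights and non-negative feature values (our illustrative names, not the paper's):

```python
# Sketch of the admissibility check for a relaxed weight vector:
# if theta_star[k] <= theta[k] for every feature k, and feature values are
# non-negative, then theta_star . f <= theta . f, so the relaxed local cost
# never overestimates the true one.

def is_admissible(theta, theta_star):
    """Element-wise lower-bound check over the features in theta."""
    return all(theta_star.get(k, 0.0) <= w for k, w in theta.items())

def local_cost(weights, feats):
    return sum(weights.get(k, 0.0) * v for k, v in feats.items())

theta = {"f1": 2.0, "f2": 0.5}
theta_star = {"f1": 1.0, "f2": 0.5}  # element-wise lower bound of theta
feats = {"f1": 1.0, "f2": 1.0}       # non-negative feature values

assert is_admissible(theta, theta_star)
print(local_cost(theta_star, feats) <= local_cost(theta, feats))  # True
```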
Our A* parsing is essentially hierarchical A* parsing (Pauls and Klein, 2009): we calculate the heuristic cost h(p) on the fly, using another chart for the relaxed space, whenever a new state p is pushed into the priority queue. Below we introduce two different projection methods, which are orthogonal and are later combined hierarchically.

Table 3: Example of our feature projection. θ_GP is a weight vector under the GP, which collapses every c. θ_LF is under the LF, which collapses all elements in Table 4.

Table 4: List of feature elements ignored in the LF: s_1.c, s_1.ft, s_1.fw, s_1.bt, s_1.bw, s_1.len, s_1.shape, s_1.rule, s_0.rule.
Grammar Projection (GP) Our first projection borrows the idea of the filter projection of Klein and Manning (2003a), in which the grammar symbols (nonterminals) are collapsed into a single label X. Our projection, however, does not collapse all labels into X; instead, we utilize constituent labels at level 2, in which labels that tend to be heads, such as S or VP, are collapsed into HP, while the others are collapsed into MP. θ_GP in Table 3 is an example of how feature weights are relaxed with this projection; here we show each feature as a tuple including the action name (a). Let π_GP be a feature projection function that replaces each constituent label in a feature with its collapsed symbol. Formally, the weight of the k-th feature, θ_GP(k), is determined by minimizing over the features collapsed by π_GP:

θ_GP(k) = min_{k′ : π_GP(g_{k′}) = g_k} θ(k′),

where g_k is the value of the k-th feature.
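The min-pooling step of the GP can be sketched directly. The label mapping and feature-key format below are illustrative toy choices, not the paper's exact tables:

```python
# Sketch of the grammar projection (GP): constituent labels inside feature
# keys are collapsed (here S, VP -> HP and NP, PP -> MP; a toy mapping), and
# the relaxed weight of each collapsed feature is the MINIMUM of the weights
# it covers, so the projected weights are a valid element-wise lower bound.

COLLAPSE = {"S": "HP", "VP": "HP", "NP": "MP", "PP": "MP"}

def project_key(key):
    """Replace the label part of a key like 's0.c=NP' with its coarse symbol."""
    head, label = key.rsplit("=", 1)
    return head + "=" + COLLAPSE.get(label, label)

def project_weights(theta):
    theta_gp = {}
    for k, w in theta.items():
        pk = project_key(k)
        theta_gp[pk] = min(theta_gp.get(pk, float("inf")), w)
    return theta_gp

theta = {"s0.c=NP": 1.0, "s0.c=PP": 0.2, "s0.c=VP": 0.7}
theta_gp = project_weights(theta)
print(theta_gp)  # {'s0.c=MP': 0.2, 's0.c=HP': 0.7}
```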
Less-Feature Projection (LF) The basic idea of our second projection is to ignore some of the atomic features in a feature template so that we can reduce the time complexity of computing the heuristics. We apply this technique to the feature elements in Table 4, simply by not filling in the actual value in each feature template. The elements in Table 4 are selected so that all the bold elements in Figure 4 are eliminated; the complexity becomes O(n^3 · |G| · |N|). In practice, this is still expensive. However, we note that the effects of the two heuristics are complementary: the LF reduces the complexity to a cubic time bound, while the GP greatly reduces the size of the grammar |G|. We combine these two ideas below.
Hierarchical Projection (HP) The basic idea of this combined projection is to use the heuristics given by the GP to guide the search of the LF. This is similar to hierarchical A* for PCFGs with multilevel symbol refinements (Pauls and Klein, 2009), the difference being that their hierarchy is over grammar symbols while our projection targets are features. When a state p is created, its heuristic score h(p) is calculated with the LF, which requires a search for the outside cost in the space of the LF, but the worst-case time complexity of that search is cubic. The GP is used to guide this inner search: for each state p_LF in the space of the LF, the GP supplies the heuristic score. We will see in the next section that this combination works quite well in practice.

Experiment
We build our final system by combining the ideas in Section 5 with the system in Section 3. We also build beam-based systems with and without dynamic programming (DP), and with the ordinary or the new span features. All systems are trained with the structured perceptron; we use the early update for training the beam-based systems.

Table 5: F1 scores on the development set. Z&C denotes the features of Zhang and Clark (2009). The speeds of non-DP and DP are the same, so we omit them from the comparison.

Effect of A* heuristics
Figure 5 shows the effects of the A* heuristics. In terms of search quality, the LF is better; it prunes 92.5% of states compared with naive BFS, while the GP prunes 75%. However, the LF takes more time to calculate heuristics than the GP. The HP combines the advantages of both, achieving the best result.
Accuracy and Speed The F1 scores on the development set are summarized in Table 5. The systems with our new (span) features perform surprisingly well, at a level competitive with the more expensive features of Zhang and Clark (2009) (Z&C). This is particularly true with DP; it sometimes outperforms Z&C, probably because our simple features facilitate the state merging of DP, which expands the search space. More importantly, our main result that the system with optimal search obtains a much higher score (90.7 F1) than beam-based systems with a larger beam size (90.2 F1) indicates that ordinary beam-based systems suffer from severe search errors even with the help of DP. Though our naive BFS is slow (1.12 sentences/s.), A* search considerably improves parsing speed (13.6 sentences/s.), making it faster than the beam-based system with a beam size of 64 (Figure 6).

Unary Merging
We have not mentioned the effect of our unary merging (Section 3), but the results indicate it has almost the same effect as the previously proposed padding method (Zhu et al., 2013). The score with the non-DP system at beam size 16 and Z&C features (89.1 F1) is the same as that reported in their paper (the features are the same).

Table 6: Comparison with previous work: beam-based systems (Zhu et al., 2013), other chart-based systems (Petrov and Klein, 2007; Socher et al., 2013), and systems with external semi-supervised features or reranking (Charniak and Johnson, 2005; McClosky et al., 2006; Zhu et al., 2013).

Table 6 compares our parsing system with those of previous studies. When we look at closed settings, where no external resource other than the training Penn Treebank is used, our system outperforms all other systems, including the Berkeley parser (Petrov and Klein, 2007) and the Stanford parser (Socher et al., 2013), in terms of F1. The parsing systems with external features or reranking outperform ours. However, it should be noted that our system could also be improved with external features. For example, type-level distributional similarity features, such as Brown clustering (Brown et al., 1992), can be incorporated into our system without changing the theoretical runtime.

Related Work and Discussion
Though our framework is shift-reduce, our system is strikingly similar to the CKY-based discriminative parser of Hall et al. (2014), because our features basically come from two nodes on the stack and their spans. From this viewpoint, it is interesting that our system outperforms theirs by a large margin (Figure 6). Identifying the source of this performance difference is beyond the scope of this paper, but we believe it is an important question for future parsing research; for example, it would be interesting to see whether shift-reduce has any structural advantage over CKY by comparing the two systems with exactly the same feature set. As shown in Section 4, the previous optimal shift-reduce parser (Sagae and Lavie, 2006) was not so strong because of the locality of its model. Other optimal parsing systems are often based on relatively simple PCFGs, such as an unlexicalized grammar (Klein and Manning, 2003b) or a factored lexicalized grammar (Klein and Manning, 2003c), in which A* heuristics from the unlexicalized grammar guide the search. However, those systems are not state-of-the-art, probably due to the limited context captured by a simple PCFG. A recent trend has thus been to extend the context of each rule (Petrov and Klein, 2007; Socher et al., 2013), but the resulting complex grammars make exact search intractable. In our system, the main source of information is spans, as in CRF parsing. This is cheap yet strong, and leads to a fast and accurate parsing system with optimality.
Concurrently with this work, Mi and Huang (2015) have developed another dynamic programming method for constituent shift-reduce parsing by keeping the step size for a sentence at 4n − 2, instead of 2n, with an un-unary (stay) action. Their final score is 90.8 F1 on WSJ. Though they experiment only with beam search, it is possible to build BFS on their transition system as well.

Conclusions
To date, all practical shift-reduce parsers have relied on approximate search, which suffers from search errors but allows the use of unrestricted features. The main result of this paper is to show another possibility for shift-reduce parsing by proceeding in the opposite direction: by selecting features carefully and improving search efficiency, a shift-reduce parser with provable search optimality can find very high quality parses in practical runtime.