What is Learned in Visually Grounded Neural Syntax Acquisition

Visual features are a promising signal for bootstrapping textual models. However, black-box learning models make it difficult to isolate the specific contribution of visual components. In this analysis, we consider the case study of the Visually Grounded Neural Syntax Learner (Shi et al., 2019), a recent approach for learning syntax from a visual training signal. By constructing simplified versions of the model, we isolate the core factors that yield its strong performance. Contrary to what the model might be capable of learning, we find that significantly less expressive versions produce similar predictions and perform just as well, or even better. We also find that a simple lexical signal of noun concreteness plays the main role in the model's predictions, as opposed to more complex syntactic reasoning.


Introduction
Language analysis within visual contexts has been studied extensively, including for instruction following (e.g., Anderson et al., 2018b; Misra et al., 2017, 2018; Blukis et al., 2018, 2019), visual question answering (e.g., Fukui et al., 2016; Hu et al., 2017; Anderson et al., 2018a), and referring expression resolution (e.g., Mao et al., 2016; Yu et al., 2016; Wang et al., 2016). While significant progress has been made on such tasks, the combination of vision and language makes it particularly difficult to identify what information is extracted from the visual context and how it contributes to the language understanding problem.
Recently, Shi et al. (2019) proposed using alignments between phrases and images as a learning signal for syntax acquisition. This task has long been studied in a text-only setting, including recently with deep learning based approaches (Shen et al., 2018a, 2019; Kim et al., 2019; Havrylov et al., 2019; Drozdov et al., 2019, inter alia). While the introduction of images provides a rich new signal for the task, it also introduces numerous challenges, such as identifying objects and analyzing scenes.
In this paper, we analyze the Visually Grounded Neural Syntax Learner (VG-NSL) model of Shi et al. (2019). In contrast to the tasks commonly studied in the intersection of vision and language, the existence of an underlying syntactic formalism allows for careful study of the contribution of the visual signal. We identify the key components of the model and design several alternatives to reduce the expressivity of the model, at times, even replacing them with simple non-parameterized rules. This allows us to create several model variants, compare them with the full VG-NSL model, and visualize the information captured by the model parameters.
Broadly, while we would expect a parsing model to distinguish between tokens and phrases along multiple dimensions to represent different syntactic roles, we observe that the model likely does not capture such information. Our experiments show that significantly less expressive models, which are unable to capture such distinctions, learn a similar model of parsing and perform as well as, or even better than, the original VG-NSL model. Our visualizations illustrate that the model is largely focused on acquiring a notion of noun concreteness optimized for the training data, rather than identifying higher-level syntactic roles. Our code and experiment logs are available at https://github.com/lil-lab/vgnsl_analysis.

Background: VG-NSL
VG-NSL consists of a greedy bottom-up parser made of three components: a token embedding function (φ), a phrase combination function (combine), and a decision scoring function (score). The model is trained using a reward signal computed by matching constituents and images.
Algorithm 1: VG-NSL greedy bottom-up parser
Input: a sentence x = x_1, ..., x_n.
Definitions: φ(·) is a token embedding function; combine(·) and score(·) are learned functions defined in Section 2.
1: C, T ← {[i, i]}_{i=1}^{n}
2: x_[i,i] ← φ(x_i)  ∀i = 1, ..., n
3: while [1, n] ∉ T do
4:   i, k, j ← argmax_{[i,k],[k+1,j] ∈ C} score(x_[i,k], x_[k+1,j])
5:   x_[i,j] ← combine(x_[i,k], x_[k+1,j])
6:   C ← (C \ {[i, k], [k + 1, j]}) ∪ {[i, j]}
7:   T ← T ∪ {[i, j]}

Given a sentence x with n tokens x_1, ..., x_n, the VG-NSL parser (Algorithm 1) greedily constructs a parse tree by building up a set of constituent spans T, which are combined spans from a candidate set C. Parsing starts by initializing the candidate set C with all single-token spans. At each step, a score is computed for each pair of adjacent candidate spans [i, k] and [k + 1, j]. The best span [i, j] is added to T and C, and the two sub-spans are removed from C. The parser continues until the complete span [1, n] is added to T.
Scoring a span [i, j] uses its span embedding x_[i,j]. First, a d-dimensional embedding for each single-token span is computed using φ. At each step, the scores of all potential new spans [i, j] are computed from the candidate embeddings x_[i,k] and x_[k+1,j]. The VG-NSL scoring function is:

score(x_[i,k], x_[k+1,j]) = MLP_s([x_[i,k]; x_[k+1,j]]) ,

where MLP_s is a two-layer feed-forward network. Once the best new span is found, its span embedding is computed using a deterministic combine function. VG-NSL computes the d-dimensional embedding of the span [i, j] as the L2-normalized sum of the two combined sub-spans:

combine(x_[i,k], x_[k+1,j]) = (x_[i,k] + x_[k+1,j]) / ‖x_[i,k] + x_[k+1,j]‖_2 .

Learning the token embedding function φ and the scoring model MLP_s relies on a visual signal from aligned images via a reward derived from matching constituents and the image. The process alternates between updating the parser parameters and an external visual matching function, which is estimated by optimizing a hinge-based triplet ranking loss similar to the image-caption retrieval loss of Kiros et al. (2014). The parser parameters are estimated using a policy gradient method based on the learned visual matching function, which encourages constituents that match the corresponding image. This visual signal is the only objective used to learn the parser parameters. After training, the images are no longer used and the parser is text-only.
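To make the control flow concrete, the greedy parsing loop can be sketched in Python. This is an illustrative stand-in, not the released implementation: the token embedding function and scorer below are toy placeholders (in VG-NSL both are learned from the visual reward), while the combine follows the L2-normalized sum described above.

```python
import numpy as np

def combine(left, right):
    # L2-normalized sum of the two sub-span embeddings (as in VG-NSL).
    s = left + right
    return s / np.linalg.norm(s)

def parse(tokens, phi, score):
    # C holds the current candidate spans left-to-right; T accumulates
    # constituents in the order they are built.
    n = len(tokens)
    C = [(i, i) for i in range(n)]
    emb = {(i, i): phi(tokens[i]) for i in range(n)}
    T = []
    while len(C) > 1:
        # Score every pair of adjacent candidate spans [i,k], [k+1,j]
        # and greedily merge the best-scoring pair.
        best = max(range(len(C) - 1),
                   key=lambda p: score(emb[C[p]], emb[C[p + 1]]))
        (i, k), (_, j) = C[best], C[best + 1]
        emb[(i, j)] = combine(emb[(i, k)], emb[(k + 1, j)])
        C[best:best + 2] = [(i, j)]
        T.append((i, j))
    return T

# Toy run: hypothetical 1d "concreteness" embeddings and a mean scorer.
phi = lambda w: np.array([{"a": 0.1, "cat": 0.9, "sat": 0.3}.get(w, 0.0)])
score = lambda l, r: (l[0] + r[0]) / 2
print(parse(["a", "cat", "sat"], phi, score))
```

In the toy run, the noun-like token with the highest value pulls its neighbor in first, so the parser merges "cat sat" before attaching "a".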

Model Variations
We consider varying the parameterization of VG-NSL, i.e., φ, combine, and score, while keeping the same inference algorithm and learning procedure. Our goal is to constrain model expressivity, while studying its performance and outputs.
Embedding Bottleneck We limit the information capacity of the parsing model by drastically reducing its dimensionality from d = 512 to 1 or 2. We reduce dimensionality by wrapping the token embedding function with a bottleneck layer φ_B(x) = MLP_B(φ(x)), where MLP_B is a two-layer feed-forward network mapping to the reduced size. This bottleneck limits the expressiveness of phrase embeddings throughout the parsing algorithm. During training, we compute both the original and reduced embeddings. The original embeddings are used to compute the visual matching reward signal, whereas the reduced embeddings are used by score to determine parsing decisions. At test time, only the reduced embeddings are used. In the case of d = 1, the model is reduced to using a single criterion. The low-dimensional embeddings are also easy to visualize, which helps characterize the type of information learned.
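A minimal sketch of the bottleneck wrapper, assuming random placeholder weights and a hypothetical hidden size (in the actual model, MLP_B is trained jointly with the parser):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_bottleneck(d_in=512, d_hidden=128, d_out=1):
    # Two-layer feed-forward network mapping a d_in-dimensional token
    # embedding down to the reduced size d_out (1 or 2 in our variants).
    W1 = rng.standard_normal((d_hidden, d_in)) * 0.05
    W2 = rng.standard_normal((d_out, d_hidden)) * 0.05
    def mlp_b(x):
        h = np.tanh(W1 @ x)   # hidden layer with nonlinearity
        return W2 @ h         # reduced embedding, used only by score
    return mlp_b

phi_b = make_bottleneck()
x = rng.standard_normal(512)  # stand-in for the original phi(token)
print(phi_b(x).shape)         # the parser now sees a 1d value per token
```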
Simplified Scoring We experiment with simplified versions of the score function. Together with the lower-dimensional representation, this enables controlling and analyzing the type of decisions the parser is capable of. As we control the information the embeddings can capture, simplifying the scoring function ensures it does not introduce additional expressivity. The first variation uses a weighted sum with learned scalar parameters u and v:

score_WS(x_[i,k], x_[k+1,j]) = u · x_[i,k] + v · x_[k+1,j] .

This formulation allows the model to learn structural biases, such as the head-initial (HI) bias common in English (Baker, 1987). The second is a non-parameterized mean, applicable for d = 1 only:

score_M(x_[i,k], x_[k+1,j]) = (x_[i,k] + τ · x_[k+1,j]) / 2 ,

where τ is a hyper-parameter that enables upweighting the right constituent to induce a HI inductive bias. We experiment with unbiased τ = 1 (score_M) and HI-biased τ = 20 (score_MHI) scoring.
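The two simplified scorers for d = 1 can be sketched directly; the exact normalization of the τ-weighted mean is an assumption for illustration:

```python
# Weighted-sum scorer: u and v are the only learned parameters.
def score_ws(left, right, u=1.0, v=1.0):
    return u * left + v * right

# Non-parameterized (weighted) mean: tau > 1 upweights the right
# constituent, inducing a head-initial bias.
def score_m(left, right, tau=1.0):
    return (left + tau * right) / 2

print(score_m(0.9, 0.3))            # unbiased mean, roughly 0.6
print(score_m(0.9, 0.3, tau=20.0))  # HI-biased: right value dominates
```

With τ = 20, a pair whose right sub-span has a high value wins over a pair whose left sub-span has an equally high value, which is exactly the head-initial preference.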

Reduced Dimension Combine
In lower dimensions, the combine function no longer produces useful outputs; e.g., in d = 1 it always gives 1 or −1. We therefore consider mean or max pooling:

combine_ME(x_[i,k], x_[k+1,j]) = (x_[i,k] + x_[k+1,j]) / 2
combine_MX(x_[i,k], x_[k+1,j]) = max(x_[i,k], x_[k+1,j]) (element-wise) .

The mean variant computes the representation of a new span as an equal mixture of the two sub-spans, while the max variant copies information into the new span representation directly from only one of the sub-spans. The max function is similar to how head rules lexicalize parsers (Collins, 1996).

We evaluate unsupervised constituency parsing performance using 5,000 non-overlapping held-out test captions. We use an additional 5,000 non-overlapping validation captions for model selection, as well as for our analysis and visualization in Section 5. We generate binary gold trees using Benepar (Kitaev and Klein, 2018), an off-the-shelf supervised constituency parser.
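The reduced-dimension combine variants described in this section amount to two one-line functions; this sketch shows their differing behavior on a toy 1d pair:

```python
import numpy as np

def combine_mean(left, right):
    # Equal mixture of the two sub-span embeddings.
    return (left + right) / 2

def combine_max(left, right):
    # Element-wise max: the new span copies from one sub-span only,
    # loosely analogous to head lexicalization.
    return np.maximum(left, right)

l, r = np.array([0.9]), np.array([0.3])
print(combine_mean(l, r))  # dilutes the concrete value
print(combine_max(l, r))   # propagates it unchanged up the tree
```

Under the concreteness reading of the 1d embeddings, max pooling lets a single highly concrete noun dominate the representation of every span containing it, while mean pooling gradually dilutes it.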

Experimental Setup
We notate model variations as ⟨d, score, combine⟩. For example, ⟨1, s_WS, c_ME⟩ refers to dimensionality d = 1, the weighted sum scoring function (s_WS), and mean pooling combine (c_ME). We train five models for each variation, and select the best checkpoint for each model by maximizing the parse prediction agreement on the validation captions between the five models. The agreement is measured by the self-F1 agreement score (Williams et al., 2018). This procedure is directly adopted from Shi et al. (2019). We use the hyper-parameters from the original implementation without further tuning.

Table 1: We refer to Shi et al. (2019) as Shi2019 and to our reproduction as Shi2019*. We report mean F1 and standard deviation for each system and recall for four phrasal categories. Our variants are specified using a representation embedding (d ∈ {1, 2}), a score function (s_M: mean, s_MHI: mean+HI, s_WS: weighted sum), and a combine function (c_MX: max, c_ME: mean).
We evaluate using gold trees by reporting F1 scores on the ground-truth constituents and recall on several constituent categories. We report the mean and standard deviation across the five models.

Experiments
Quantitative Evaluation Table 1 shows our main results. As the table illustrates, the model variations achieve F1 scores competitive with those reported by Shi et al. (2019) across training setups. They achieve comparable recall on different constituent categories and comparable robustness to parameter initialization, quantified by self-F1, which we report in an expanded version of this table in Appendix A. The model variations closest to the original model, ⟨1, s_WS, c_ME⟩ and ⟨2, s_WS, c_ME⟩, yield similar performance to the original model across different evaluation categories and metrics, especially in the +HI and +HI+FastText settings. Most remarkably, our simplest variants, which use 1d embeddings and a non-parameterized scoring function, remain competitive (⟨1, s_M, c_ME⟩) or even outperform (⟨1, s_MHI, c_MX⟩) the original VG-NSL.
Our simplified model variations largely learn the same parsing model as the original. Table 2 shows self-F1 agreement computed by comparing constituents predicted by our models in each training setting with those of the original model. We compute this agreement measure by training two sets of five models on the training data, and selecting checkpoints using the validation captions for each of our model variants and the original VG-NSL model. We parse the same validation captions using each model, generating ten parse trees for each caption, one per model (i.e., five for each distinct set). We calculate self-F1 agreement between models by comparing parse trees from the model variants to parse trees from the original VG-NSL. We consider all 25 (five by five) combinations of variant/VG-NSL pairs and obtain the self-F1 agreement between a model variant and the original VG-NSL by averaging the scores of the pairs. For the upper-bound agreement calculation, we train two distinct sets of five original VG-NSL models. Our parsing model is very similar, but not exactly identical: there is roughly a six-point F1 agreement gap in the best case compared to the upper bound. We consider these numbers a worst-case scenario because self-F1 agreement measures on the validation data are used twice: first for model selection, to eliminate the variance of each five-model set, and second for the variant agreement analysis.

Table 2: Self-F1 agreement between two of our variations and the original VG-NSL model. We also report the upper-bound scores (U) calculated by directly comparing two separately trained sets of five original VG-NSL models.
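The pairwise agreement computation can be sketched as follows, representing each predicted tree as a set of constituent spans; the toy model sets and spans are hypothetical:

```python
from itertools import product

def f1(pred, gold):
    # Span-based F1 between two trees given as sets of (i, j) constituents.
    overlap = len(pred & gold)
    if not overlap:
        return 0.0
    p, r = overlap / len(pred), overlap / len(gold)
    return 2 * p * r / (p + r)

def self_f1(set_a, set_b):
    # Average F1 over all pairings of models from the two model sets
    # (5 x 5 = 25 pairs in the paper's setting).
    pairs = list(product(set_a, set_b))
    return sum(f1(a, b) for a, b in pairs) / len(pairs)

# Toy example: two "models" per set, one caption each.
set_a = [{(0, 2), (1, 2)}, {(0, 2), (0, 1)}]
set_b = [{(0, 2), (1, 2)}, {(0, 2), (1, 2)}]
print(self_f1(set_a, set_b))
```

In practice each model contributes one tree per validation caption, and the per-pair F1 is aggregated over all captions before averaging across the 25 pairs.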

Expressivity Analysis
We analyze the embeddings of the two variants closest to the original model, ⟨1, s_WS, c_ME⟩ and ⟨2, s_WS, c_ME⟩, to identify the information they capture. Both behave similarly to the original VG-NSL. Figure 1 visualizes the token embedding space for these variants. Interestingly, the distribution of the 2d token embeddings is almost linear, suggesting that the additional dimension is largely not utilized during learning, and both variants show a strong preference for separating nouns from tokens belonging to other parts of speech. This suggests that only one core visual signal is used by the model, and that once this factor is captured, even a 1d model can propagate it through the tree. We hypothesize that the core visual aspect learned, which is captured even in the 1d setting, is noun concreteness. Table 3 shows that the reduced token embeddings correlate strongly with existing estimates of concreteness. Figure 2 shows the ordering of example nouns according to our learned 1d model representation. We observe that the concreteness estimated by our model correlates with nouns that are relatively easy to ground visually in MSCOCO images. For example, nouns like "giraffe" and "elephant" are considered most concrete. These nouns are relatively frequent in MSCOCO (e.g., "elephant" appears 4,633 times in the training captions) and also have low variance in their appearances. On the other hand, nouns with high variance in images (e.g., "traveller") or abstract nouns (e.g., "chart", "spot") are estimated to have low concreteness. Appendix A includes examples of concreteness.
We quantify the role of concreteness-based noun identification in VG-NSL by modifying test-time captions to replace all nouns with the most concrete token (i.e., "elephant"), measured according to the 1d token embeddings learned by our model. We pick the most concrete noun for each training configuration using the mean ranking across the token embeddings of the five models in that configuration. For example, instead of parsing the original caption "girl holding a picture," we parse "elephant holding an elephant." This uses part-of-speech information to resolve the issue that nouns with low concreteness would otherwise be treated in the same manner as other part-of-speech tokens. We compare the output trees to the original gold trees for evaluation. We observe that the F1 score, averaged across the five models, significantly improves after our caption modification: from 55.0 to 62.9 for ⟨1, s_WS, c_ME⟩ and from 54.6 to 60.2 for the original VG-NSL. The performance increase shows that noun identification via concreteness provides an effective parsing strategy, and further corroborates our hypothesis about the phenomena underlying the strong result of Shi et al. (2019). Table 4 includes the results for the other training settings.

Table 4: F1 scores evaluated before and after replacing nouns in captions with the most concrete token predicted by models using the ⟨1, s_WS, c_ME⟩ configuration. The replacement occurs at test time only, as described in Section 5. In Basic Setting*, we remove one model from ⟨1, s_WS, c_ME⟩ which has a significantly low F1 agreement (54.2) with the other four models using the ⟨1, s_WS, c_ME⟩ configuration.
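The test-time caption modification amounts to a simple substitution given part-of-speech tags; this sketch assumes the tags are provided and ignores article agreement ("a"/"an") for simplicity:

```python
def replace_nouns(tokens, tags, most_concrete="elephant"):
    # Replace every NOUN-tagged token with the most concrete token,
    # selected per training configuration from the 1d embeddings.
    return [most_concrete if t == "NOUN" else w
            for w, t in zip(tokens, tags)]

tokens = ["girl", "holding", "a", "picture"]
tags = ["NOUN", "VERB", "DET", "NOUN"]
print(replace_nouns(tokens, tags))
```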

Conclusion and Related Work
We studied the VG-NSL model by introducing several significantly less expressive variants, analyzing their outputs, and showing that they maintain, and even improve, performance. Our analysis shows that the visual signal leads VG-NSL to rely mostly on estimates of noun concreteness, in contrast to more complex syntactic reasoning. While our model variants are very similar to the original VG-NSL, they are not completely identical, as reflected by the self-F1 scores in Table 2. Studying this type of difference between expressive models and their less expressive, restricted variants remains an important direction for future work. For example, this could be achieved by distilling the original model into the less expressive variants, and observing both the agreement between the models and their performance. In our case, this requires further development of distillation methods for the type of reinforcement learning setup VG-NSL uses, an effort that is beyond the scope of this paper.
Our work is related to the recent inference procedure analysis of Dyer et al. (2019). While they study the biases a specific inference algorithm introduces to the unsupervised parsing problem, we focus on the representation induced in a grounded version of the task. Our empirical analysis is related to Htut et al. (2018), who methodically and successfully replicated the results of Shen et al. (2018a) to study their performance. The issues we study generalize beyond the parsing task. The question of what is captured by vision and language models has been studied before, including for visual question answering (Agrawal et al., 2016, 2017; Goyal et al., 2017), referring expression resolution (Cirik et al., 2018), and visual navigation (Jain et al., 2019). We ask this question in the setting of syntactic parsing, which allows us to ground the analysis in the underlying formalism. Our conclusions are similar: multi-modal models often rely on simple signals, and do not exhibit the complex reasoning we would like them to acquire.

Appendix A

Table 5 is an extended version of Table 1 from Section 5. We include standard deviations for the phrasal category recall and self-F1 scores evaluated across different parameter initializations. Figure 3 is a larger version of Figure 1 from Section 5. It visualizes the token embeddings of ⟨1, s_WS, c_ME⟩ and ⟨2, s_WS, c_ME⟩ for all universal part-of-speech categories (Petrov et al., 2012). Figures 4 and 5 show several examples visualizing our learned representations with the ⟨1, s_WS, c_ME⟩ variant, the 1d variant closest to the original model, as a concreteness estimate. Figure 4 shows the most concrete nouns, and Figure 5 shows the least concrete nouns. We selected nouns from the top (bottom) 5% of the data as most (least) concrete. We randomly selected image-caption pairs for these nouns.
At the end of the supplementary material, we include tree visualizations comparing gold trees with phrasal categories, trees generated by the original VG-NSL, and trees generated by our best-performing simplified variant, ⟨1, s_MHI, c_MX⟩. We select the trees to highlight the differences between VG-NSL and our variant. First, we select all development trees on which all five VG-NSL models agree, to avoid results that are likely due to initialization differences. We do the same for our variant. Finally, we select all trees where the two sets, from VG-NSL and our variant, disagree.

Table 5: We report mean F1 and standard deviation for each system and mean recall and standard deviation for four phrasal categories. Our variants are specified using a representation embedding (d ∈ {1, 2}), a score function (s_M: mean, s_MHI: mean+HI, s_WS: weighted sum), and a combine function (c_MX: max, c_ME: mean).