Visually Grounded Neural Syntax Acquisition

We present the Visually Grounded Neural Syntax Learner (VG-NSL), an approach for learning syntactic representations and structures without any explicit supervision. The model learns by looking at natural images and reading paired captions. VG-NSL generates constituency parse trees of texts, recursively composes representations for constituents, and matches them with images. We define concreteness of constituents by their matching scores with images, and use it to guide the parsing of text. Experiments on the MSCOCO data set show that VG-NSL outperforms various unsupervised parsing approaches that do not use visual grounding, in terms of F1 scores against gold parse trees. We find that VG-NSL is much more stable with respect to the choice of random initialization and the amount of training data. We also find that the concreteness acquired by VG-NSL correlates well with a similar measure defined by linguists. Finally, we also apply VG-NSL to multiple languages in the Multi30K data set, showing that our model consistently outperforms prior unsupervised approaches.


Introduction
We study the problem of visually grounded syntax acquisition. Consider the images in Figure 1, paired with their descriptive texts (captions) in English. Given no prior knowledge of English, and sufficiently many such pairs, one can infer the correspondence between certain words and visual attributes (e.g., recognizing that "a cat" refers to the objects in the blue boxes). One can further extract constituents, by assuming that concrete spans of words should be processed as a whole, and thus form the constituents. Similarly, the same process can be applied to verb and prepositional phrases.

Figure 1: We propose to use image-caption pairs to extract constituents from text, based on the assumption that similar spans should be matched to similar visual objects, and that these concrete spans form constituents.
This intuition motivates the use of image-text pairs to facilitate automated language learning, including both syntax and semantics. In this paper we focus on learning syntactic structures, and propose the Visually Grounded Neural Syntax Learner (VG-NSL, shown in Figure 2). VG-NSL acquires syntax, in the form of constituency parsing, by looking at images and reading captions.
At a high level, VG-NSL builds latent constituency trees of word sequences and recursively composes representations for constituents. Next, it matches the visual and textual representations. The training procedure is built on the hypothesis that a better syntactic structure contributes to a better representation of constituents, which then leads to better alignment between vision and language. We use no human-labeled constituency trees or other syntactic labeling (such as part-of-speech tags). Instead, we define a concreteness score of constituents based on their matching with images, and use it to guide the parsing of sentences. At test time, no images paired with the text are needed.
We compare VG-NSL with prior approaches to unsupervised language learning, most of which do not use visual grounding. Our first finding is that VG-NSL improves over the best previous approaches to unsupervised constituency parsing in terms of F1 scores against gold parse trees. We also find that many existing approaches are quite unstable with respect to the choice of random initialization, whereas VG-NSL exhibits consistent parsing results across multiple training runs. Third, we analyze the performance of different models on different types of constituents, and find that our model shows substantial improvement on noun phrases and prepositional phrases, which are common in captions. Fourth, VG-NSL is much more data-efficient than prior work based purely on text, achieving comparable performance to other approaches using only 20% of the training captions. In addition, the concreteness score, which emerges during the matching between constituents and images, correlates well with a similar measure defined by linguists. Finally, VG-NSL can be easily extended to multiple languages, which we evaluate on the Multi30K data set (Elliott et al., 2016, 2017), consisting of German and French image captions.

Related Work
Linguistic structure induction from text. Recent work has proposed several approaches for inducing latent syntactic structures, including constituency trees (Choi et al., 2018; Yogatama et al., 2017; Maillard and Clark, 2018; Havrylov et al., 2019; Kim et al., 2019; Drozdov et al., 2019) and dependency trees (Shi et al., 2019), from the distant supervision of downstream tasks. However, most of these methods are not able to produce linguistically sound structures, or even structures that are consistent across runs with fixed data and hyperparameters but different random initializations (Williams et al., 2018).
A related line of research is to induce latent syntactic structure via language modeling. This approach has achieved remarkable performance on unsupervised constituency parsing (Shen et al., 2018a, 2019), especially in identifying the boundaries of higher-level (i.e., larger) constituents. To our knowledge, the Parsing-Reading-Predict Network (PRPN; Shen et al., 2018a) and the Ordered Neuron LSTM (ON-LSTM; Shen et al., 2019) currently produce the best fully unsupervised constituency parsing results. One issue with PRPN, however, is that it tends to produce meaningless parses for lower-level (smaller) constituents (Phu Mon Htut et al., 2018).
Over the last two decades, there has been extensive study of unsupervised constituency parsing (Klein and Manning, 2002, 2004, 2005; Bod, 2006a,b; Ponvert et al., 2011) and dependency parsing (Klein and Manning, 2004; Smith and Eisner, 2006; Spitkovsky et al., 2010; Han et al., 2017). However, most of these approaches rely on linguistic annotations. Specifically, they operate on the part-of-speech tags of words instead of word tokens. One exception is Spitkovsky et al. (2011), which produces dependency parse trees based on automatically induced pseudo tags.
In contrast to these existing approaches, we focus on inducing constituency parse trees with visual grounding. We use parallel data from another modality (i.e., paired images and captions), instead of linguistic annotations such as POS tags. We include a detailed comparison between some related works in the supplementary material.
There has been some prior work on improving unsupervised parsing by leveraging extra signals, such as parallel text (Snyder et al., 2009), annotated data in another language with parallel text (Ganchev et al., 2009), annotated data in other languages without parallel text (Cohen et al., 2011), or non-parallel text from multiple languages (Cohen and Smith, 2009). We leave the integration of other grounding signals as future work.
Grounded language acquisition. Grounded language acquisition has been studied for image-caption data (Christie et al., 2016a), video-caption data (Siddharth et al., 2014; Yu et al., 2015), and visual reasoning (Mao et al., 2019). However, existing approaches rely on human labels or rules for classifying visual attributes or actions. In contrast, our model induces syntactic structures with no human-defined labels or rules.

Visually Grounded Neural Syntax Learner
Given a set of paired images and captions, our goal is to learn representations and structures for words and constituents. Toward this goal, we propose the Visually Grounded Neural Syntax Learner (VG-NSL), an approach for the grounded acquisition of the syntax of natural language. VG-NSL is inspired by the idea of semantic bootstrapping (Pinker, 1984), which suggests that children acquire syntax by first understanding the meaning of words and phrases, and then linking them with syntax.
At a high level (Figure 2), VG-NSL consists of two modules. First, given an input caption (i.e., a sentence describing an image) as a sequence of tokens, VG-NSL builds a latent constituency parse tree, and recursively composes representations for every constituent. Next, it matches the textual representations of the constituents with the paired image. Both modules are jointly optimized from natural supervision: the model acquires constituency structures, composes textual representations, and links them with visual scenes, by looking at images and reading paired captions.

Textual Representations and Structures
VG-NSL starts by composing a binary constituency structure of the text, using an easy-first bottom-up parser (Goldberg and Elhadad, 2010). The composition of the tree for a caption of length n consists of n − 1 steps. Let X^(t) = (x^(t)_1, x^(t)_2, ..., x^(t)_k) denote the textual representations of the sequence of constituents after step t, where k = n − t. For simplicity, we use X^(0) to denote the word embeddings of all tokens (the initial representations).
At step t, a score function score(·; Θ), parameterized by Θ, is evaluated on all pairs of consecutive constituents, resulting in a vector score(X^(t−1); Θ) of length n − t, whose j-th entry is the score of combining x^(t−1)_j with x^(t−1)_{j+1}. We implement score(·; Θ) as a two-layer feedforward network.
A pair of consecutive constituents (x^(t−1)_{j*}, x^(t−1)_{j*+1}) is sampled from all pairs of consecutive constituents, with respect to the distribution produced by a softmax over the scores (at test time, we take the argmax instead). The selected pair is combined to form a single new constituent. Thus, after step t, the number of constituents is decreased by 1. The textual representation of the new constituent is defined as the L2-normalized sum of the two component constituents:

x^(t)_{j*} = (x^(t−1)_{j*} + x^(t−1)_{j*+1}) / ‖x^(t−1)_{j*} + x^(t−1)_{j*+1}‖₂.
Figure 3: An illustration of how VG-NSL composes a constituency parse tree for the caption "a cat is on the ground". At each step, the score function score is evaluated on all pairs of consecutive constituents (dashed lines). Next, a pair of constituents is sampled from all pairs w.r.t. a distribution computed by the softmax of all predicted scores. The selected pair of constituents is combined into a larger one, while the other constituents remain unchanged (solid lines).
We find that using a more complex encoder for constituents, such as GRUs, causes the representations to be highly biased towards a few salient words in the sentence (e.g., the encoder encodes only the word "cat" while ignoring the rest of the caption; Shi et al., 2018a; Wu et al., 2019). This significantly degrades the performance of linguistic structure induction.
We repeat this score-sample-combine process for n − 1 steps, until all words in the input text have been combined into a single constituent (Figure 3). This ends the inference of the constituency parse tree. Since at each step we combine two consecutive constituents, the derived tree contains 2n − 1 constituents (including all words).
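The score-sample-combine loop can be sketched as follows. This is a minimal runnable illustration, not the released implementation: the learned two-layer feedforward scorer is replaced by a random stand-in, and we use the test-time argmax rather than training-time sampling.

```python
import numpy as np

def combine(x_left, x_right):
    """Compose a new constituent as the L2-normalized sum of its children."""
    s = x_left + x_right
    return s / np.linalg.norm(s)

def score_pair(x_left, x_right, rng):
    # Stand-in for the learned two-layer feedforward score(.; Theta);
    # a random score keeps this sketch runnable without any training.
    return rng.normal()

def parse(word_embeddings, rng=None):
    """Test-time (argmax) easy-first bottom-up parse.

    Returns the bracketed tree as nested tuples over word indices.
    During training, the pair index would instead be sampled from a
    softmax over the scores.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    constituents = [np.asarray(e, dtype=float) for e in word_embeddings]
    spans = [(i,) for i in range(len(constituents))]  # leaf spans
    for _ in range(len(constituents) - 1):            # n - 1 combine steps
        scores = [score_pair(constituents[j], constituents[j + 1], rng)
                  for j in range(len(constituents) - 1)]
        j = int(np.argmax(scores))
        constituents[j:j + 2] = [combine(constituents[j], constituents[j + 1])]
        spans[j:j + 2] = [(spans[j], spans[j + 1])]
    return spans[0]

tree = parse(np.random.default_rng(1).normal(size=(5, 8)))
```

Each iteration shrinks the sequence by one, so a 5-word caption yields a full binary tree after 4 merges.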

Visual-Semantic Embeddings
We follow an approach similar to that of Kiros et al. (2014) to define the visual-semantic embedding (VSE) space for paired images and text constituents. Let v^(i) denote the vector representation of an image i, and c^(i)_j denote the vector representation of the j-th constituent of its corresponding caption. During the matching with images, we ignore the tree structure and index the constituents as a flat list. A function m(·, ·; Φ) gives the matching score between images and texts: the cosine similarity between the textual representation and the image representation projected into the joint space, where the parameters Φ align the visual and textual representations into that joint space.
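A minimal sketch of the matching function, assuming (as in common VSE implementations) that Φ is a linear projection from the raw image-feature space into the joint space and that the score is a cosine similarity:

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    return x / (np.linalg.norm(x) + eps)

def match_score(image_feature, constituent_embedding, Phi):
    """Cosine similarity m(v, c; Phi) between the projected image and a
    constituent.  Phi is a learned projection (here a plain matrix) from
    the raw image-feature space into the joint embedding space."""
    v = l2_normalize(Phi @ image_feature)
    c = l2_normalize(constituent_embedding)
    return float(v @ c)

rng = np.random.default_rng(0)
Phi = rng.normal(size=(512, 2048))      # e.g. 2048-D ResNet features -> 512-D
score = match_score(rng.normal(size=2048), rng.normal(size=512), Phi)
```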

Training
We optimize the visual-semantic representations (Φ) and constituency structures (Θ) in an alternating fashion. At each iteration, given the constituency parsing results of captions, Φ is optimized to match the visual and the textual representations. Next, given the visual grounding of constituents, Θ is optimized to produce constituents that can be better matched with images. Specifically, we optimize textual representations and the visual-semantic embedding space using a hinge-based triplet ranking loss:

L(Φ) = Σ_{i,j} Σ_{k≠i} [m(v^(k), c^(i)_j) − m(v^(i), c^(i)_j) + δ]_+ + Σ_{i,j} Σ_{k≠i, ℓ} [m(v^(i), c^(k)_ℓ) − m(v^(i), c^(i)_j) + δ]_+,

where i and k index over all image-caption pairs in the data set, while j and ℓ enumerate all constituents of the captions c^(i) and c^(k), respectively, δ is a constant margin, and [·]_+ denotes max(0, ·). The loss L extends the loss for image-caption retrieval of Kiros et al. (2014), by introducing alignments between images and sub-sentence constituents.
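The hinge-based triplet ranking loss can be written out as a direct (unvectorized) transcription; real implementations batch this with matrix operations. All vectors are assumed L2-normalized, so a dot product is a cosine matching score.

```python
def triplet_ranking_loss(image_embs, caption_constituents, delta=0.2):
    """Hinge-based triplet ranking loss over images and constituents.

    image_embs: list of L2-normalized image vectors v(i) in the joint space.
    caption_constituents: one list of L2-normalized constituent vectors
        c(i)_j per caption.  All other image-caption pairs in the batch
        serve as contrastive (negative) examples.
    """
    loss = 0.0
    n = len(image_embs)
    for i in range(n):
        for c in caption_constituents[i]:
            pos = float(image_embs[i] @ c)       # m(v(i), c(i)_j)
            for k in range(n):
                if k == i:
                    continue
                # contrastive images for a grounded constituent
                loss += max(0.0, float(image_embs[k] @ c) - pos + delta)
                # contrastive constituents for the image
                for c_neg in caption_constituents[k]:
                    loss += max(0.0, float(image_embs[i] @ c_neg) - pos + delta)
    return loss
```

When every constituent matches its own image perfectly and is orthogonal to all negatives, every hinge term is inactive and the loss is zero.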
We optimize textual structures via distant supervision: they are optimized for a better alignment between the derived constituents and the images. Intuitively, the following objective encourages adjectives to be associated (combined) with the corresponding nouns, and verbs/prepositions to be associated (combined) with the corresponding subjects and objects. Specifically, we use REINFORCE (Williams, 1992) as the gradient estimator for Θ. Consider the parsing process of a specific caption c^(i), and denote the corresponding image embedding v^(i). For a constituent z of c^(i), we define its (visual) concreteness concrete(z, v^(i)) as:

concrete(z, v^(i)) = Σ_{k≠i} [m(z, v^(i)) − m(z, v^(k)) − δ]_+ + Σ_{k≠i, ℓ} [m(z, v^(i)) − m(c^(k)_ℓ, v^(i)) − δ]_+,    (1)

where δ is a fixed margin. At step t, we define the reward for combining a pair of constituents (x^(t−1)_j, x^(t−1)_{j+1}) into x^(t)_j as the concreteness of the new constituent:

r_t(j) = concrete(x^(t)_j, v^(i)).    (2)

In plain words, at each step, we encourage the model to compose a constituent that maximizes the alignment between the new constituent and the corresponding image. During training, we sample constituency parse trees of captions, and reinforce each composition step using Equation 2. At test time, no images paired with the text are needed.
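Concreteness (Equation 1) can be transcribed directly: hinged margins against contrastive images and contrastive constituents, mirroring the two terms of the ranking loss. This is an illustrative sketch (all vectors assumed L2-normalized), not the released implementation.

```python
def hinge(x):
    return max(0.0, x)

def concreteness(z, image_idx, image_embs, all_constituents, delta=0.2):
    """Visual concreteness concrete(z, v(i)) of constituent z with respect
    to its paired image v(i) = image_embs[image_idx].

    all_constituents: one list of constituent vectors per caption; the
    captions/images of other pairs act as contrastive examples."""
    pos = float(z @ image_embs[image_idx])       # m(z, v(i))
    score = 0.0
    for k, v in enumerate(image_embs):           # contrastive images
        if k != image_idx:
            score += hinge(pos - float(z @ v) - delta)
    for k, consts in enumerate(all_constituents):  # contrastive constituents
        if k == image_idx:
            continue
        for c in consts:
            score += hinge(pos - float(c @ image_embs[image_idx]) - delta)
    return score
```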

The Head-Initial Inductive Bias
English and many other Indo-European languages are usually head-initial (Baker, 2001). For example, in verb phrases or prepositional phrases, the verb (or the preposition) precedes the complements (e.g., the object of the verb). Consider the simple caption a white cat on the lawn. While the association of the adjective (white) could be induced from the visual grounding of phrases, whether the preposition (on) should be associated with a white cat or the lawn is more challenging to induce. Thus, we impose an inductive bias to guide the learner to correctly associate prepositions with their complements, determiners with the corresponding noun phrases, and complementizers with the corresponding relative clauses. Specifically, we discourage abstract constituents (i.e., constituents that cannot be grounded in the image) from being combined with a preceding constituent, by modifying the original reward definition (Equation 2) as:

r'_t(j) = r_t(j) / (1 + λ · abstract(x^(t−1)_{j+1}, v^(i))),    (3)

where λ is a scalar hyperparameter, v^(i) is the image embedding corresponding to the caption being parsed, and abstract denotes the abstractness of the span, defined analogously to concreteness (Equation 1), with the positive and contrastive scores exchanged:

abstract(z, v^(i)) = Σ_{k≠i} [m(z, v^(k)) − m(z, v^(i)) + δ]_+ + Σ_{k≠i, ℓ} [m(c^(k)_ℓ, v^(i)) − m(z, v^(i)) + δ]_+.

The intuition here is that the initial heads of prepositional phrases (e.g., on) and relative clauses (e.g., which, where) are usually abstract words. During training, we encourage the model to associate these abstract words with the succeeding constituents instead of the preceding ones. It is worth noting that such an inductive bias is language-specific, and cannot be applied to head-final languages such as Japanese (Baker, 2001). We leave the design of head-directionality inductive biases for other languages as future work.
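The reward modification itself is a one-liner: the combination reward is deflated whenever the right-hand constituent is abstract, so abstract heads (e.g., prepositions) prefer attaching to what follows them. `lam` corresponds to λ; the value 20 reported later in the paper was tuned by self-agreement F1.

```python
def head_initial_reward(base_reward, right_child_abstractness, lam=20.0):
    """Deflate the reward for combining (x_j, x_{j+1}) when the right
    child x_{j+1} is abstract, i.e. when an abstract span would be
    attached to a preceding constituent.  lam is the penalty weight."""
    return base_reward / (1.0 + lam * right_child_abstractness)
```

A fully concrete right child (abstractness 0) leaves the reward unchanged; a highly abstract one shrinks it toward zero.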

Experiments
We evaluate VG-NSL for unsupervised parsing in a few ways: F1 score against gold trees, self-consistency across different choices of random initialization, performance on different types of constituents, and data efficiency. In addition, we find that the concreteness score acquired by VG-NSL is consistent with a similar measure defined by linguists. We focus on English for the main experiments, but also extend to German and French.

Data Sets and Metrics
We use the standard split of the MSCOCO data set (Lin et al., 2014), following Karpathy and Fei-Fei (2015). It contains 82,783 images for training, 1,000 for development, and another 1,000 for testing. Each image is associated with 5 captions.
For the evaluation of constituency parsing, the Penn Treebank (PTB; Marcus et al., 1993) is a widely used, manually annotated data set. However, PTB consists of sentences from abstract domains, e.g., the Wall Street Journal (WSJ), which are not visually grounded and whose linguistic structures can hardly be induced by VG-NSL. Here we evaluate models on the MSCOCO test set, which is well-matched to the training domain; we leave the extension of our work to more abstract domains to future work. We apply Benepar (Kitaev and Klein, 2018), an off-the-shelf constituency parser with state-of-the-art performance (95.52 F1 score) on the WSJ test set, to parse the captions in the MSCOCO test set as gold constituency parse trees. We evaluate all of the investigated models using the F1 score compared to these gold parse trees.
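Unlabeled bracketing F1 can be computed from the sets of constituent spans of the two trees. Conventions differ on whether trivial spans (single words, the whole sentence) are counted; the sketch below simply counts every internal span of a binary tree given as nested tuples of word indices.

```python
def tree_to_spans(tree):
    """Collect the (start, end) spans of all internal constituents of a
    binary tree given as nested tuples, e.g. ((0, 1), (2, (3, 4)))."""
    spans = set()
    def walk(node):
        if isinstance(node, int):
            return node, node
        left, right = walk(node[0]), walk(node[1])
        span = (left[0], right[1])
        spans.add(span)
        return span
    walk(tree)
    return spans

def f1_score(pred_tree, gold_tree):
    """Unlabeled bracketing F1 between a predicted and a gold tree."""
    pred, gold = tree_to_spans(pred_tree), tree_to_spans(gold_tree)
    overlap = len(pred & gold)
    if overlap == 0:
        return 0.0
    p, r = overlap / len(pred), overlap / len(gold)
    return 2 * p * r / (p + r)
```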

Baselines
We compare VG-NSL with various baselines for unsupervised tree structure modeling of texts. We can categorize the baselines by their training objective or supervision.
Trivial tree structures. Similarly to recent work on latent tree structures (Williams et al., 2018; Phu Mon Htut et al., 2018; Shi et al., 2018b), we include three types of trivial baselines without linguistic information: random binary trees, left-branching binary trees, and right-branching binary trees.
Syntax acquisition by language modeling and statistics. Shen et al. (2018a) propose the Parsing-Reading-Predict Network (PRPN), which predicts syntactic distances (Shen et al., 2018b) between adjacent words and composes a binary tree based on these distances to improve language modeling. The learned distances can be mapped to a binary constituency parse tree by recursively splitting the sentence between the two consecutive words with the largest syntactic distance.
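This distance-to-tree conversion is a simple recursive split at the largest syntactic distance; a sketch:

```python
def distances_to_tree(words, distances):
    """Convert syntactic distances between adjacent words into a binary
    tree (nested tuples) by recursively splitting at the largest distance.

    len(distances) must equal len(words) - 1; distances[j] is the
    syntactic distance between words[j] and words[j + 1].
    """
    if len(words) == 1:
        return words[0]
    split = max(range(len(distances)), key=distances.__getitem__)
    left = distances_to_tree(words[:split + 1], distances[:split])
    right = distances_to_tree(words[split + 1:], distances[split + 1:])
    return (left, right)
```

The same conversion is used for all syntactic-distance baselines (PRPN, ON-LSTM gate values, and PMI distances).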
Ordered neurons (ON-LSTM; Shen et al., 2019) is a recurrent unit based on the LSTM cell (Hochreiter and Schmidhuber, 1997) that explicitly regularizes different neurons in a cell to represent short-term or long-term information. After training on the language modeling task, Shen et al. (2019) suggest that the gate values in ON-LSTM cells can be viewed as syntactic distances (Shen et al., 2018b) between adjacent words and used to induce latent tree structures. ON-LSTM achieves state-of-the-art unsupervised constituency parsing performance on the WSJ test set. We train both PRPN and ON-LSTM on all captions in the MSCOCO training set and use them as baselines.
Inspired by the syntactic distance-based approaches (Shen et al., 2018a, 2019), we also introduce another baseline, PMI, which uses negative pointwise mutual information (Church and Hanks, 1990) between adjacent words as the syntactic distance. We compose constituency parse trees based on the distances in the same way as for PRPN and ON-LSTM.
Syntax acquisition from downstream tasks. Choi et al. (2018) propose to compose binary constituency parse trees directly from downstream tasks using the Gumbel softmax trick (Jang et al., 2017). We integrate a Gumbel tree-based caption encoder into the visual semantic embedding approach (Kiros et al., 2014). The model is trained on the downstream task of image-caption retrieval.
Syntax acquisition from concreteness estimation. Since we apply concreteness information to train VG-NSL, it is worth comparing against unsupervised constituency parsing based on previous approaches for predicting word concreteness. This set of baselines includes semi-supervised estimation (Turney et al., 2011), crowdsourced labeling (Brysbaert et al., 2014), and multimodal estimation (Hessel et al., 2018). Note that none of these approaches has previously been applied to unsupervised constituency parsing. Implementation details can be found in the supplementary material.
Based on the concreteness scores of words, we introduce another baseline similar to VG-NSL. Specifically, we recursively combine the two consecutive constituents with the largest average concreteness, and use the average concreteness as the score of the composed constituent. The algorithm generates binary constituency parse trees of captions. For a fair comparison, we implement a variant of this algorithm that also uses a head-initial inductive bias, and include the details in the appendix.
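A sketch of this greedy concreteness-based baseline (without the head-initial variant):

```python
def concreteness_parse(tokens, concreteness):
    """Greedy baseline: repeatedly merge the adjacent pair of constituents
    with the largest average concreteness; the merged constituent takes
    that average as its own score.  Returns nested tuples over tokens."""
    nodes = list(tokens)
    scores = list(concreteness)
    while len(nodes) > 1:
        j = max(range(len(nodes) - 1),
                key=lambda j: (scores[j] + scores[j + 1]) / 2)
        merged = (scores[j] + scores[j + 1]) / 2
        nodes[j:j + 2] = [(nodes[j], nodes[j + 1])]
        scores[j:j + 2] = [merged]
    return nodes[0]
```

For example, with concreteness scores rising toward "cat", the concrete pair ("white", "cat") is merged first, yielding ("a", ("white", "cat")).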

Implementation Details
Across all experiments and all models (including baselines such as PRPN, ON-LSTM, and Gumbel), the embedding dimension for words and constituents is 512. For VG-NSL, we use a pre-trained ResNet-101 (He et al., 2016), trained on ImageNet (Russakovsky et al., 2015), to extract vector embeddings for images. Thus, Φ is a mapping from a 2048-D image embedding space to a 512-D visual-semantic embedding space. For the score function in constituency parsing, we use a hidden dimension of 128 and ReLU activation. All VG-NSL models are trained for 30 epochs. We use an Adam optimizer (Kingma and Ba, 2015) with initial learning rate 5 × 10^−4 to train VG-NSL. The learning rate is re-initialized to 2.5 × 10^−4 after 15 epochs. We tune other hyperparameters of VG-NSL on the development set using the self-agreement F1 score (Williams et al., 2018) over 5 runs with different choices of random initialization.

Table 1: Recall of specific typed phrases, and overall F1 score, evaluated on the MSCOCO test split, averaged over 5 runs with different random initializations. We also include the self-agreement F1 score (Williams et al., 2018) across the 5 runs. ± denotes standard deviation. * denotes models requiring extra labels and/or corpora, and † denotes models requiring a pre-trained visual feature extractor. We highlight the best number in each column among all models that do not require extra data other than paired image-caption data, as well as the overall best number. The Left, Right, PMI, and concreteness estimation-based models have no standard deviation or self F1 (shown as N/A), as they are deterministic given the training and/or testing data.
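The self-agreement F1 used for tuning averages pairwise F1 between the trees produced by different runs; a sketch, parameterized by any tree-level F1 function:

```python
from itertools import combinations

def self_agreement_f1(trees_per_run, f1):
    """Self-agreement F1 (Williams et al., 2018): the F1 between the trees
    produced by two training runs on the same sentences, averaged over
    all pairs of runs.

    trees_per_run: one list of parse trees (aligned by sentence) per run.
    f1: a pairwise tree-F1 function, e.g. unlabeled bracketing F1.
    """
    pair_scores = []
    for run_a, run_b in combinations(trees_per_run, 2):
        sent_scores = [f1(ta, tb) for ta, tb in zip(run_a, run_b)]
        pair_scores.append(sum(sent_scores) / len(sent_scores))
    return sum(pair_scores) / len(pair_scores)
```

High self-agreement means the model converges to similar trees regardless of random initialization, which is what makes it usable as an unsupervised model-selection signal.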

Results: Unsupervised Constituency Parsing
We evaluate the induced constituency parse trees via the overall F1 score, as well as the recall of four types of constituents: noun phrases (NP), verb phrases (VP), prepositional phrases (PP), and adjective phrases (ADJP) (Table 1). We also evaluate the robustness of models trained with fixed data and hyperparameters but different random initializations, in two ways: via the standard deviation of performance across multiple runs, and via the self-agreement F1 score (Williams et al., 2018), which is the average F1 taken over pairs of different runs. Among all models that do not require extra labels, VG-NSL with the head-initial inductive bias (VG-NSL+HI) achieves the best F1 score. PRPN (Shen et al., 2018a) and a concreteness estimation-based baseline (Hessel et al., 2018) both produce competitive results. It is worth noting that the PRPN baseline reaches this performance without any information from images. However, the performance of PRPN is less stable than that of VG-NSL across random initializations. In contrast to its state-of-the-art performance on the WSJ full set (Shen et al., 2019), we observe that ON-LSTM does not perform well on the MSCOCO caption data set. However, it remains the best model for adjective phrases, which is consistent with the results reported by Shen et al. (2019).
In addition to the best overall F1 scores, VG-NSL+HI achieves competitive scores across most phrase types (NP, VP, and PP). Our models (VG-NSL and VG-NSL+HI) perform best on NP and PP, which are the most common visually grounded phrases in the MSCOCO data set. In addition, our models produce much higher self F1 than the baselines (Shen et al., 2018a, 2019; Choi et al., 2018), showing that they reliably produce reasonable constituency parse trees under different initializations.
We also test the effectiveness of using pre-trained word embeddings. Specifically, for VG-NSL+HI+FastText, we use a pre-trained FastText embedding (300-D; Joulin et al., 2016), concatenated with a 212-D trainable embedding, as the word embedding. Using pre-trained word embeddings further improves performance to an average F1 of 54.4% while keeping a comparable self F1.

Results: Data Efficiency
We compare the data efficiency of PRPN (the strongest baseline), ON-LSTM, VG-NSL, and VG-NSL+HI. We train the models using 1%, 2%, 5%, 10%, 20%, 50%, and 100% of the MSCOCO training set, and report the overall F1 and self F1 scores on the test set (Figure 4).
Compared to PRPN trained on the full training set, VG-NSL and VG-NSL+HI reach comparable performance using only 20% of the data (i.e., 8K images with 40K captions). VG-NSL also quickly becomes more stable (in terms of the self F1 score) as the amount of data increases, while PRPN and ON-LSTM remain less stable.

Analysis: Consistency with Linguistic Concreteness
During training, VG-NSL acquires concreteness estimates for constituents via Equation 1. Here, we evaluate the consistency between word-level concreteness estimates induced by VG-NSL and those produced by other methods (Turney et al., 2011; Brysbaert et al., 2014; Hessel et al., 2018). Specifically, we measure the correlation between the concreteness estimates on the most frequent words in the MSCOCO test set (Table 2). For any word with representation z, we estimate its concreteness by averaging concrete(z, v^(i)) across all associated images v^(i). The high correlation between VG-NSL and the concreteness scores produced by Turney et al. (2011) and Brysbaert et al. (2014) supports the argument that the linguistic concept of concreteness can be acquired in an unsupervised way. Our model also achieves a high correlation with Hessel et al. (2018), which likewise estimates word concreteness based on visual-domain information.
Analysis: Model Selection via Self-Agreement F1

We also use the self-agreement F1 score for hyperparameter tuning and model selection, scoring each candidate configuration by the average F1 between the trees generated by different runs, where F1(·, ·) denotes the F1 score between the trees generated by two models, N the number of different runs, and δ the margin ensuring that only nearby checkpoints are compared (in all of our experiments, N = 5 and δ = 2). After finding the best hyperparameters H0, we train the model another N times with different random initialization, and select the best models by self-agreement F1.

We compare the performance of VG-NSL selected by the self F1 score with that selected by recall at 1 in image-to-text retrieval (R@1 in Table 3; Kiros et al., 2014). As a model selection criterion, self F1 consistently outperforms R@1 (avg. F1: 50.4 vs. 47.7 and 53.3 vs. 53.1 for VG-NSL and VG-NSL+HI, respectively). Meanwhile, it is worth noting that even if we select VG-NSL by R@1, it shows better stability than PRPN and ON-LSTM (Table 1), in terms of both the score variance across different random initializations and self F1: the variance of avg. F1 is always less than 0.6, while the self F1 is greater than 80.

Table 4: Overall F1 scores on the Multi30K test set (Young et al., 2014; Elliott et al., 2016, 2017), averaged over 5 runs with different random initialization. ± denotes the standard deviation.
Note that the PRPN and ON-LSTM models are not tuned using self F1, since these models are usually trained for hundreds or thousands of epochs, and it is thus computationally expensive to evaluate self F1. We leave the efficient tuning of these baselines via self F1 as future work.

Extension to Multiple Languages
We extend our experiments to the Multi30K data set, which is built on the Flickr30K data set (Young et al., 2014) and consists of English, German (Elliott et al., 2016), and French (Elliott et al., 2017) captions. Multi30K contains 29,000 images in the training set, 1,014 in the development set, and 1,000 in the test set. Each image is associated with one caption in each language.
We compare our models to PRPN and ON-LSTM in terms of overall F1 score (Table 4). VG-NSL with the head-initial inductive bias consistently performs the best across the three languages, all of which are highly head-initial (Baker, 2001). Note that the F1 scores here are not comparable to those in Table 1, since Multi30K (English) has 13x fewer captions than MSCOCO.

Discussion
We have proposed a simple but effective model, the Visually Grounded Neural Syntax Learner, for visually grounded language structure acquisition. VG-NSL jointly learns parse trees and visually grounded textual representations. In our experiments, we find that this approach to grounded language learning produces parsing models that are both accurate and stable, and that the learning is much more data-efficient than a state-of-the-art text-only approach. Along the way, the model acquires estimates of word concreteness.
The results suggest multiple future research directions. First, VG-NSL matches text embeddings directly with embeddings of entire images. Its performance may be boosted by considering structured representations of both images (e.g., Lu et al., 2016; Wu et al., 2019) and texts (Steedman, 2000). Second, thus far we have used a shared representation for both syntax and semantics, but it may be useful to disentangle their representations (Steedman, 2000). Third, our best model is based on the head-initial inductive bias. Automatically acquiring such inductive biases from data remains challenging (Kemp et al., 2006; Gauthier et al., 2018). Finally, it may be possible to extend our approach to other linguistic tasks such as dependency parsing (Christie et al., 2016b), coreference resolution (Kottur et al., 2018), and learning pragmatics beyond semantics (Andreas and Klein, 2016).
There are also limitations to the idea of grounded language acquisition. In particular, the current approach has thus far been applied to understanding grounded texts in a single domain (static visual scenes for VG-NSL). Its applicability could be extended by learning shared representations across multiple modalities (Castrejon et al., 2016) or integrating with pure text-domain models (such as PRPN, Shen et al., 2018a).

Supplementary Material
The supplementary material is organized as follows. First, in Section A, we summarize and compare existing models for constituency parsing without explicit syntactic supervision. Next, in Section B, we present more implementation details of VG-NSL. Third, in Section C, we present the implementation details for all of our baseline models. Fourth, in Section D, we present the evaluation details of Benepar (Kitaev and Klein, 2018) on the MSCOCO data set. Fifth, in Section E, we qualitatively and quantitatively compare the concreteness scores estimated or labeled by different methods. Finally, in Section F, we show sample trees generated by VG-NSL on the MSCOCO test set.

A Overview of Models for Constituency Parsing without Explicit Syntactic Supervision

In Table 5, we compare existing models for constituency parsing without explicit syntactic supervision, with respect to their learning objective, dependence on extra labels or corpora, and other features. The table also covers previous work on parsing sentences based on gold part-of-speech tags. Note that the concreteness estimation methods (Turney et al., 2011; Brysbaert et al., 2014; Hessel et al., 2018) had not previously been applied to unsupervised parsing.

B Implementation Details for VG-NSL
We adopt the code released by Faghri et al. (2018) as the visual-semantic embedding module for VG-NSL. Following them, we fix the margin δ to 0.2. We also use the vocabulary provided by Faghri et al. (2018).

Hyperparameter tuning. As stated in the main text, we use the self-agreement F1 score (Williams et al., 2018) as an unsupervised signal for tuning all hyperparameters. Besides the learning rate and other conventional hyperparameters, we also tune λ, the hyperparameter of the head-initial bias model, which controls the weight of the penalty for "right abstract constituents". We choose λ from {1, 2, 5, 10, 20, 50, 100} and find that λ = 20 gives the best self-agreement F1 score.

C Implementation Details for Baselines
Trivial tree structures. We show examples for left-branching binary trees and right-branching binary trees in Figure 5. As for binary random trees, we iteratively combine two randomly selected adjacent constituents. This procedure is similar to that shown in Algorithm 2.

Parsing-Reading-Predict Network (PRPN).
We use the code released by Shen et al. (2018a) to train PRPN. We tune the hyperparameters with respect to language modeling perplexity (Jelinek et al., 1977). For a fair comparison, we fix the hidden dimension of all hidden layers of PRPN to 512. We use the Adam optimizer (Kingma and Ba, 2015) to optimize the parameters. The tuned hyperparameters are the number of layers (1, 2, 3) and the learning rate (1 × 10^-3, 5 × 10^-4, 2 × 10^-4). The models are trained for up to 100 epochs on the MSCOCO data set and 1,000 epochs on the Multi30K data set, with early stopping based on language modeling perplexity.
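The perplexity criterion and the early-stopping check used for model selection can be sketched as follows; the patience-based stopping rule is an assumption for illustration, not necessarily the exact criterion used in these experiments.

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-likelihood per token."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

def should_stop(history, patience=3):
    """Stop when validation perplexity has not improved over the best
    earlier value for `patience` consecutive epochs (patience value
    is illustrative)."""
    if len(history) <= patience:
        return False
    best = min(history[:-patience])
    return min(history[-patience:]) >= best
```

For example, a model assigning uniform probability 1/4 to every token has perplexity exactly 4.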
Ordered Neurons (ON-LSTM). We use the code released by Shen et al. (2019) to train ON-LSTM. We tune the hyperparameters with respect to language modeling perplexity (Jelinek et al., 1977), and use perplexity as an early stopping criterion. For a fair comparison, the hidden dimension of all hidden layers is set to 512, and the chunk size is changed to 16 to fit the hidden layer size. Following the original paper (Shen et al., 2019), we set the number of layers to 3, and report the constituency parse tree with respect to the gate values output by the second layer of ON-LSTM. To obtain a better perplexity, we explore both Adam (Kingma and Ba, 2015) and SGD as the optimizer, and tune the learning rate (1 × 10^-3, ...).
Concreteness estimation baselines. For the concreteness scores of Turney et al. (2011), we follow their method to estimate PMI between words. The PMI is then used to compute similarity between seen and unseen words, which is further used as weights to estimate concreteness for unseen words. For the concreteness scores from crowdsourcing, we use the released data set of Brysbaert et al. (2014). Similarly to VG-NSL, the multimodal concreteness score (Hessel et al., 2018) is also estimated on the MSCOCO training set, using an open-sourced implementation.
Constituency parsing with concreteness scores.
Denote by α(w) the concreteness score estimated by a model for the word w. Given a sequence of concreteness scores (α(w_1), α(w_2), ..., α(w_m)) for the caption tokens, we aim to produce a binary constituency parse tree. We first normalize the concreteness scores to the range [−1, 1]. For methods whose original scores lie in the range (0, +∞), we let α(w) = log α(w) before normalizing. As in VG-NSL, we treat unseen (i.e., out-of-vocabulary) words by assigning them a concreteness of −1, under the assumption that unseen words are the most abstract ones.
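As a sketch, the normalization step might be implemented as follows. Min-max scaling to [−1, 1] is assumed here as one standard choice; the exact normalization used in the experiments may differ.

```python
import math

def normalize_concreteness(alpha, log_transform=False):
    """Map raw concreteness scores (a dict word -> score) to [-1, 1]
    via min-max scaling (an assumed choice of normalization). If the
    raw scores lie in (0, +inf), apply a log transform first, as
    described in the text. Unseen words are assigned -1 separately."""
    vals = {w: math.log(s) if log_transform else s for w, s in alpha.items()}
    lo, hi = min(vals.values()), max(vals.values())
    return {w: 2 * (v - lo) / (hi - lo) - 1 for w, v in vals.items()}
```

For instance, the most concrete word in the vocabulary maps to 1 and the least concrete to −1.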
We compose constituency parse trees from the normalized concreteness scores by iteratively combining consecutive constituents. At each step, we select the two adjacent constituents (initially, single words) with the highest average concreteness score and combine them into a larger constituent, whose concreteness is the average of its children's. We repeat this procedure until only one constituent is left.
As for the head-initial inductive bias, we weight the concreteness of the right constituent by a hyperparameter τ > 1 when ranking pairs of consecutive constituents during selection; the concreteness of the composed constituent remains the unweighted average of its two children. For consistency with VG-NSL, we set τ = 20 in all of our experiments.
The procedure is summarized in Algorithm 2.
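A minimal Python sketch of Algorithm 2, returning constituent boundaries as 0-based inclusive token spans (naming is illustrative):

```python
def concreteness_parse(scores, tau=20.0):
    """Greedy bottom-up parsing from normalized concreteness scores
    (Algorithm 2): repeatedly merge the adjacent pair maximizing
    a_j + tau * a_{j+1}; the merged constituent's score is the mean
    of its children's. Returns the merge boundaries in order."""
    a = list(scores)
    left = list(range(len(a)))    # leftmost token index per constituent
    right = list(range(len(a)))   # rightmost token index per constituent
    boundaries = []
    while len(a) > 1:
        # head-initial bias: the right constituent is weighted by tau
        p = max(range(len(a) - 1), key=lambda j: a[j] + tau * a[j + 1])
        boundaries.append((left[p], right[p + 1]))
        a[p:p + 2] = [(a[p] + a[p + 1]) / 2]   # score = average of children
        left[p:p + 2] = [left[p]]
        right[p:p + 2] = [right[p + 1]]
    return boundaries
```

Note how τ changes the result: with τ = 1 the pair with the highest average concreteness is merged first, while a large τ favors merging to the left of highly concrete words.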

D Details of Manual Ground Truth Evaluation
It is important to confirm that the constituency parse trees of the MSCOCO captions produced by Benepar (Kitaev and Klein, 2018) are of sufficiently high quality to serve as reliable ground truth for evaluating other models.
To verify this, we randomly sample 50 captions from the MSCOCO test split, and manually label their constituency parse trees without reference to either Benepar or the paired images, following the principles of Bies et al. (1995) as closely as possible. The manually labeled constituency parse trees are publicly available at https://ttic.uchicago.edu/freda/vgnsl/manually_labeled_trees.txt. Note that we label only the tree structures, without constituent labels (e.g., NP and PP). Most failure cases of Benepar are related to human commonsense in resolving parsing ambiguities, e.g., prepositional phrase attachment (Figure 7).

Figure 7: A failure example by Benepar, which fails to parse the noun phrase "three white sinks in a bathroom under mirrors": according to human commonsense, it is much more common for sinks, rather than a bathroom, to be under mirrors. However, most of the constituents (e.g., "three white sinks" and "under mirrors") are still successfully extracted by Benepar.

We compare the manually labeled trees with those produced by Benepar (Kitaev and Klein, 2018), and find that the F1 score between them is 95.65.

Algorithm 2: Constituency parsing based on concreteness estimation.
    Input: list of normalized concreteness scores a = (a_1, a_2, ..., a_m); hyperparameter τ
    Output: boundaries of constituents B = {(L_i, R_i)}_{i=1,...,m-1}
    for j = 1 to m do
        left_j = j
        right_j = j
    end
    while len(a) > 1 do
        p = argmax_j (a_j + τ · a_{j+1})
        add (left_p, right_{p+1}) to B
        a = a_{<p} + ((a_p + a_{p+1}) / 2) + a_{>p+1}
        left = left_{<p} + (left_p) + left_{>p+1}
        right = right_{<p} + (right_{p+1}) + right_{>p+1}
    end
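The unlabeled bracket F1 used in such comparisons can be sketched as follows (an illustrative implementation; standard evaluation scripts additionally handle details such as duplicate brackets):

```python
def bracket_f1(gold_spans, pred_spans):
    """Unlabeled bracket F1 between two sets of constituent spans,
    each span given as a (left, right) token-index pair."""
    gold, pred = set(gold_spans), set(pred_spans)
    if not gold or not pred:
        return 0.0
    overlap = len(gold & pred)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)
```

For example, two trees sharing 2 of their 3 brackets each score F1 = 2/3.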

E Concreteness by Different Models

E.1 Correlation between Different Concreteness Estimations
We report the correlation between the different methods for concreteness estimation in Table 6.

E.2 Concreteness Scores of Sample Words by Different Methods
We present the concreteness scores estimated or labeled by the different methods in Figure 6, which qualitatively shows that the methods correlate well with each other.
F Sample Trees Generated by VG-NSL
Figure 8 shows sample trees generated by VG-NSL with the head-initial inductive bias (VG-NSL+HI). All captions are chosen from the MSCOCO test set.