Semantic Matching of Documents from Heterogeneous Collections: A Simple and Transparent Method for Practical Applications

We present a very simple, unsupervised method for the pairwise matching of documents from heterogeneous collections. We demonstrate our method with the Concept-Project matching task, which is a binary classification task involving pairs of documents from heterogeneous collections. Although our method only employs standard resources without any domain- or task-specific modifications, it clearly outperforms the more complex system of the original authors. In addition, our method is transparent, because it provides explicit information about how a similarity score was computed, and efficient, because it is based on the aggregation of (pre-computable) word-level similarities.


Introduction
We present a simple and efficient unsupervised method for pairwise matching of documents from heterogeneous collections. Following Gong et al. (2018), we consider two document collections heterogeneous if their documents differ systematically with respect to vocabulary and/or level of abstraction. These defining differences are often accompanied by a difference in length, although a length difference alone does not make document collections heterogeneous. Examples include collections in which expert answers are mapped to non-expert questions (e.g. InsuranceQA by Feng et al. (2015)), but also so-called community QA collections (Blooma and Kurian (2011)), where the lexical mismatch between Q and A documents is often less pronounced than the length difference. Like many other approaches, the proposed method is based on word embeddings as universal meaning representations, and on vector cosine as the similarity metric. However, instead of computing a pair of document representations and measuring their similarity, our method assesses the document-pair similarity on the basis of selected pairwise word similarities. This has the following advantages, which make our method a viable candidate for practical, real-world applications: efficiency, because pairwise word similarities can be efficiently (pre-)computed and cached, and transparency, because the selected words from each document are available as evidence for what the similarity computation was based on. We demonstrate our method with the Concept-Project matching task (Gong et al. (2018)), which is described in the next section.

Task, Data Set, and Original Approach
The Concept-Project matching task is a binary classification task where each instance is a pair of heterogeneous documents: one concept, which is a short science curriculum item from NGSS 1, and one project, which is a much longer science project description for school children from ScienceBuddies 2.
CONCEPT LABEL: ecosystems: -ls2.a: interdependent relationships in ecosystems CONCEPT DESCRIPTION: Ecosystems have carrying capacities, which are limits to the numbers of organisms and populations they can support. These limits result from such factors as the availability of living and nonliving resources and from such challenges as predation, competition, and disease. Organisms would have the capacity to produce populations of great size were it not for the fact that environments and resources are finite. This fundamental tension affects the abundance (number of individuals) of species in any given ecosystem.
PROJECT LABEL: Primary Productivity and Plankton PROJECT DESCRIPTION: Have you seen plankton? I am not talking about the evil villain trying to steal the Krabby Patty recipe from Mr. Krab. I am talking about plankton that live in the ocean. In this experiment you can learn how to collect your own plankton samples and see the wonderful diversity in shape and form of planktonic organisms. The oceans contain both the earth's largest and smallest organisms. Interestingly they share a delicate relationship linked together by what they eat. The largest of the ocean's inhabitants, the Blue Whale, eats very small plankton, which themselves eat even smaller phytoplankton. All of the linkages between predators, grazers, and primary producers in the ocean make up an enormously complicated food web. The base of this food web depends upon phytoplankton, very small photosynthetic organisms which can make their own energy by using energy from the sun. These phytoplankton provide the primary source of the essential nutrients that cycle through our ocean's many food webs. This is called primary productivity, and it is a very good way of measuring the health and abundance of our fisheries. There are many different kinds of phytoplankton in our oceans. [...] One way to study plankton is to collect the plankton using a plankton net to collect samples of macroscopic and microscopic plankton organisms. The net is cast out into the water or trolled behind a boat for a given distance then retrieved. Upon retrieving the net, the contents of the collecting bottle can be removed and the captured plankton can be observed with a microscope. The plankton net will collect both phytoplankton (photosynthetic plankton) and zooplankton (non-photosynthetic plankton and larvae) for observation. In this experiment you will make your own plankton net and use it to collect samples of plankton from different marine or aquatic locations in your local area. You can observe both the abundance (total number of organisms) and diversity (number of different kinds of organisms) of planktonic forms to make conclusions about the productivity and health of each location. In this experiment you will make a plankton net to collect samples of plankton from different locations as an indicator of primary productivity. You can also count the number of phytoplankton (which appear green or brown) compared to zooplankton (which are mostly marine larval forms) and compare. Do the numbers balance, or is there more of one type than the other? What effect do you think this has on productivity cycles? Food chains are very complex. Find out what types of predators and grazers you have in your area. You can find this information from a field guide or from your local Department of Fish and Game. Can you use this information to construct a food web for your local area? Some blooms of phytoplankton can be harmful and create an anoxic environment that can suffocate the ecosystem and leave a "Dead Zone" behind. Did you find an excess of brown algae or diatoms? These can be indicators of a harmful algal bloom. Re-visit this location over several weeks to report on an increase or decrease of these types of phytoplankton. Do you think that a harmful algal bloom could be forming in your area? For an experiment that studies the relationship between water quality and algal bloom events, see the Science Buddies project Harmful Algal Blooms in the Chesapeake Bay.
Figure 1: Example of a matching Concept-Project pair.
The publicly available data set 3 contains 510 labelled pairs 4 involving C = 75 unique concepts and P = 230 unique projects. A pair is annotated as 1 if the project matches the concept (57%), and as 0 otherwise (43%). The annotation was done by undergraduate engineering students. Gong et al. (2018) do not provide a specification or annotation guidelines for the semantics of the 'matches' relation to be annotated. Instead, they create gold standard annotations based on a majority vote over three manual annotations. Figure 1 provides an example of a matching C-P pair. The concept labels can be very specific, potentially introducing vocabulary that is not present in the actual concept descriptions. The extent to which this information is used by Gong et al. (2018) is not entirely clear, so we experiment with several setups (cf. Section 4).

Gong et al. (2018)'s Approach
The approach by Gong et al. (2018) is based on the idea of reducing the longer document in the pair to a set of topics which capture the essence of the document in a way that eliminates the effect of a potential length difference. In order to overcome the vocabulary mismatch, these topics are not based on words and their distributions (as in LSI (Deerwester et al. (1990)) or LDA (Blei et al. (2003))), but on word embedding vectors. Matching is then essentially done by measuring the cosine similarity between the topic vectors and the words of the shorter document. Gong et al. (2018) motivate their approach mainly with the length mismatch argument, which they claim makes approaches relying on document representations (incl. vector averaging) unsuitable. Accordingly, they use Doc2Vec (Le and Mikolov (2014)) as one of their baselines, and show that its performance is inferior to their method. They do not, however, provide a much simpler averaging-based baseline. As a second baseline, they use Word Mover's Distance (Kusner et al. (2015)), which is based on word-level distances rather than distances of global document representations, but which also fails to be competitive with their topic-based method. Gong et al. (2018) use two different sets of word embeddings: One (topic wiki) was trained on a full English Wikipedia dump, the other (topic science) on a smaller subset of that dump containing only science articles.
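For reference, the Word Mover's Distance baseline can be computed with gensim's built-in wmdistance. The following is a minimal sketch under our own assumptions (the embedding file path and the toy token lists are illustrative, and this is not necessarily the implementation Gong et al. (2018) used):

```python
# Minimal sketch of a WMD baseline using gensim. The file path and the
# toy token lists are assumptions for illustration only.
from gensim.models import KeyedVectors

# Load pretrained GoogleNews Word2Vec vectors (path is an assumption).
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

concept = "ecosystems have carrying capacities".split()
project = "collect plankton samples to measure productivity".split()

# Lower WMD means the two token sequences are more similar.
distance = vectors.wmdistance(concept, project)
print(f"WMD(concept, project) = {distance:.3f}")
```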

Our Method
We develop our method as a simple alternative to that of Gong et al. (2018). We aim at comparable or better classification performance, but with a simpler model. We also design the method such that it provides human-interpretable results in an efficient way. One common way to compute the similarity of two documents (i.e. word sequences) c and p is to first average over the word embeddings for each sequence, and then to compute the cosine similarity between the two averages. In the first step, weighting can be applied by multiplying each vector with the TF, IDF, or TF*IDF score of its pertaining word. We implement this standard measure (AVG COS SIM) as a baseline for both our method and the method by Gong et al. (2018). It yields a single scalar similarity score. The core idea of our alternative method is to turn the above process upside down, by first computing the cosine similarity of selected pairs of words from c and p, and averaging over the similarity scores afterwards (cf. also Section 6). More precisely, we implement a measure TOP n COS SIM AVG as the average of the n highest pairwise cosine similarities of the n top-ranking words in c and p. Ranking, again, is done by TF, IDF, and TF*IDF. For each ranking, we take the n top-ranking words from c and p, compute n × n similarities, rank them by decreasing similarity, and average over the top n similarities. This measure yields both a scalar similarity score and a list of ⟨c_x, p_y, sim⟩ tuples, which represent the qualitative aspects of c and p on which the similarity score is based.
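For concreteness, the following is a minimal sketch of both measures. The interfaces are assumptions for illustration, not our released implementation: emb is a dict-like mapping from words to numpy vectors, and c_weight / p_weight map a word to its TF, IDF, or TF*IDF score in the respective document.

```python
# Minimal sketch of AVG_COS_SIM and TOP_n_COS_SIM_AVG. Assumptions: `emb`
# maps word -> 1-D numpy vector; `c_weight` / `p_weight` map a word to its
# TF, IDF, or TF*IDF score in the concept / project document.
import numpy as np

def cos(u, v):
    # Cosine similarity of two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def avg_cos_sim(c_words, p_words, emb, c_weight, p_weight):
    # Baseline: weighted average of word vectors per document, then cosine.
    # (Cosine is scale-invariant, so mean vs. weighted sum does not matter.)
    def doc_vec(words, weight):
        return np.mean([weight(w) * emb[w] for w in set(words) if w in emb],
                       axis=0)
    return cos(doc_vec(c_words, c_weight), doc_vec(p_words, p_weight))

def top_n_cos_sim_avg(c_words, p_words, emb, c_weight, p_weight, n):
    # Rank each document's in-vocabulary words by weight; keep the top n.
    top_c = sorted((w for w in set(c_words) if w in emb),
                   key=c_weight, reverse=True)[:n]
    top_p = sorted((w for w in set(p_words) if w in emb),
                   key=p_weight, reverse=True)[:n]
    # Compute all n x n pairwise word similarities ...
    pairs = [(cw, pw, cos(emb[cw], emb[pw])) for cw in top_c for pw in top_p]
    # ... keep the n most similar pairs as evidence tuples, and average them.
    evidence = sorted(pairs, key=lambda t: t[2], reverse=True)[:n]
    score = float(np.mean([s for _, _, s in evidence]))
    return score, evidence
```

A pair is then classified as a match if the returned score exceeds a threshold T, which is tuned as described in the next section.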

Experiments
Setup All experiments are based on off-the-shelf word-level resources: We employ WOMBAT (Müller and Strube (2018)) for easy access to the 840B GloVe (Pennington et al. (2014)) and the GoogleNews 5 Word2Vec (Mikolov et al. (2013)) embeddings. These embedding resources, while slightly outdated, are still widely used. However, they cannot handle out-of-vocabulary tokens due to their fixed, word-level lexicon. Therefore, we also use a pretrained English fastText model 6 (Bojanowski et al. (2017); Grave et al. (2018)), which also includes subword information. IDF weights for approximately 12 million distinct words were obtained from the English Wikipedia dump provided by the Polyglot project (Al-Rfou et al. (2013)). All resources are case-sensitive, i.e. they might contain different entries for words that only differ in case (cf. Section 5). We run experiments in different setups, varying both the input representation (GloVe vs. Google vs. fastText embeddings, ± TF-weighting, and ± IDF-weighting) for concepts and projects, and the extent to which concept descriptions are used: For the latter, Label means only the concept label (first and second row in the example), Description means only the textual description of the concept, and Both means the concatenation of Label and Description. For the projects, we always use both label and description. For the project descriptions, we extract only the last column of the original file (CONTENT), and remove user comments and some boilerplate. Each instance in the resulting data set is a tuple ⟨c, p, label⟩, where c and p are bags of words, with case preserved and function words 7 removed, and label is either 0 or 1.
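A minimal preprocessing sketch follows. It is illustrative only: the tokenizer and the placeholder stop-word list are assumptions (the paper uses WOMBAT for embedding access and the Polyglot-derived IDF weights), and idf is assumed to map a word to its IDF weight.

```python
# Minimal preprocessing sketch: build case-preserving bags of words with
# function words removed, plus TF and TF*IDF weighting. The stop-word list
# is a placeholder; `idf` is assumed to map word -> IDF weight.
import re

FUNCTION_WORDS = {"the", "a", "an", "of", "and", "or", "to", "in", "is"}

def to_bag_of_words(text):
    # Naive tokenization; case is preserved (cf. Section 5).
    tokens = re.findall(r"[A-Za-z][A-Za-z'-]*", text)
    return [t for t in tokens if t.lower() not in FUNCTION_WORDS]

def make_weight(bag, idf=None):
    # Returns a word -> score function usable for ranking and weighting:
    # plain TF if idf is None, TF*IDF otherwise.
    def weight(word):
        tf = bag.count(word)
        return tf * idf.get(word, 0.0) if idf is not None else tf
    return weight
```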
Parameter Tuning Our method is unsupervised, but we need to define a threshold parameter which controls the minimum similarity that a concept and a project description must have in order to be considered a match. Also, the TOP n COS SIM AVG measure has a parameter n which controls how many ranked words are used from c and p, and how many similarity scores are averaged to create the final score. Parameter tuning experiments were performed on a random subset of 20% of our data set (54% positive). Note that Gong et al. (2018) used only 10% of their 537-instance data set as tuning data. The tuning data results of the best-performing parameter values for each setup can be found in Tables 1 and 2. The top F scores per type of concept input (Label, Description, Both) are given in bold. For AVG COS SIM and TOP n COS SIM AVG, we determined the threshold values (T) on the tuning data with a simple grid search in steps of .005 over the range from 0.3 to 1.0. For TOP n COS SIM AVG, we additionally varied the value of n from 2 to 30 in steps of 2.
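The search itself is straightforward. A minimal sketch, reusing top_n_cos_sim_avg and make_weight from the earlier sketches (tuning_set, a list of ⟨c, p, label⟩ instances, and idf are assumed inputs):

```python
# Minimal sketch of the parameter search: T from 0.3 to 1.0 in steps of
# .005, n from 2 to 30 in steps of 2. `tuning_set`, `emb`, and `idf` are
# assumed inputs; top_n_cos_sim_avg and make_weight are defined above.
import numpy as np

def f1(preds_golds):
    # F score over (prediction, gold) pairs for the positive class.
    tp = sum(1 for p, g in preds_golds if p == 1 and g == 1)
    fp = sum(1 for p, g in preds_golds if p == 1 and g == 0)
    fn = sum(1 for p, g in preds_golds if p == 0 and g == 1)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def tune(tuning_set, emb, idf):
    best = (0.0, None, None)  # (F score, T, n)
    for n in range(2, 31, 2):
        # Similarity scores depend only on n, so compute them once per n.
        scored = [(top_n_cos_sim_avg(c, p, emb, make_weight(c, idf),
                                     make_weight(p, idf), n)[0], label)
                  for c, p, label in tuning_set]
        for t in np.arange(0.300, 1.0, 0.005):
            f = f1([(1 if s >= t else 0, g) for s, g in scored])
            if f > best[0]:
                best = (f, float(t), n)
    return best  # best F score with its threshold T and top-n value
```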

Results
The top tuning data scores for AVG COS SIM (Table 1) show that the Google embeddings with TF*IDF weighting yield the top F score for all three concept input types (.881-.945). Somewhat expectedly, the best overall F score (.945) is produced in the setting Both, which provides the most information. Actually, this is true for all four weighting schemes for both GloVe and Google, while fastText consistently yields its top F scores (.840-.911) in the Label setting, which provides the least information. Generally, the level of performance of the simple baseline measure AVG COS SIM on this data set is rather striking. For TOP n COS SIM AVG, the tuning data results (Table 2) are somewhat more varied: First, there is no single best-performing set of embeddings: Google yields the best F score for the Label setting (.953), while GloVe (though only barely) leads in the Description setting (.912). This time, it is fastText which produces the best F score in the Both setting, which is also the best overall tuning data F score for TOP n COS SIM AVG (.954). While the difference to the Google result for Label is only minimal, it is striking that the best overall score is again produced using the 'richest' setting, i.e. the one involving both TF and IDF weighting and the most informative input. We then selected the best-performing parameter settings for every concept input and ran experiments on the held-out test data. Since the original data split used by Gong et al. (2018) is unknown, we cannot exactly replicate their settings, but we also perform ten runs, each using a randomly selected 10% of our 408-instance test data set, and report average P, R, F, and standard deviation. The results can be found in Table 3. For comparison, the two top rows provide the best results of Gong et al. (2018). The first interesting finding is that the AVG COS SIM measure again performs very well: In all three settings, it beats both the system based on general-purpose embeddings (topic wiki) and the one that is adapted to the science domain (topic science), with again the Both setting yielding the best overall result (.926). Note that our Both setting is probably the one most similar to the concept input used by Gong et al. (2018). This result corroborates our findings on the tuning data, and clearly contradicts the (implicit) claim made by Gong et al. (2018) regarding the infeasibility of document-level matching for documents of different lengths. The second, more important finding is that our proposed TOP n COS SIM AVG measure is also very competitive, as it also outperforms both systems by Gong et al. (2018) in two of the three settings (cf. Table 3). It only fails in the setting using only the Description input. 8 This is all the more important as we exclusively employ off-the-shelf, general-purpose embeddings, while Gong et al. (2018) reach their best results with a much more sophisticated system and with embeddings that were custom-trained for the science domain. Thus, while the performance of our proposed TOP n COS SIM AVG method is superior to the approach by Gong et al. (2018), it is itself outperformed by the 'baseline' AVG COS SIM method with appropriate weighting. However, apart from raw classification performance, our method also aims at providing human-interpretable information on how a classification was made. In the next section, we perform a detailed analysis on a selected setup.
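The test protocol can be sketched as follows. This is illustrative only: classify(c, p) is an assumed function that applies a tuned measure and threshold and returns 0 or 1, and the fixed seed is our own addition for reproducibility.

```python
# Minimal sketch of the test protocol: ten runs on random 10% samples of
# the 408-instance test set, reporting mean and standard deviation of
# P, R, and F. `classify(c, p)` is an assumed tuned classifier.
import random
import numpy as np

def evaluate(test_set, classify, runs=10, fraction=0.1, seed=42):
    rng = random.Random(seed)
    stats = []
    for _ in range(runs):
        sample = rng.sample(test_set, int(fraction * len(test_set)))
        pg = [(classify(c, p), label) for c, p, label in sample]
        tp = sum(1 for p, g in pg if p == 1 and g == 1)
        fp = sum(1 for p, g in pg if p == 1 and g == 0)
        fn = sum(1 for p, g in pg if p == 0 and g == 1)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        stats.append((prec, rec, f))
    # Per-metric mean and standard deviation over the ten runs.
    return np.mean(stats, axis=0), np.std(stats, axis=0)
```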

Detail Analysis
The similarity-labelled word pairs from concept and project description which are selected during classification with the TOP n COS SIM AVG measure provide a way to qualitatively evaluate the basis on which each similarity score was computed. We see this as an advantage over average-based comparison (like AVG COS SIM), since it provides a means to check the plausibility of the decision. Here, we are mainly interested in the overall best result, so we perform a detailed analysis on the best-performing Both setting only (fastText, TF*IDF weighting, T = .310, n = 14). Since the Concept-Project matching task is a binary classification task, its performance can be qualitatively analysed by providing examples of instances that were classified correctly (True Positive (TP) and True Negative (TN)) or incorrectly (False Positive (FP) and False Negative (FN)). Table 5 shows the concept and project words from selected instances (one TP, FP, TN, and FN case each) of the tuning data set. Concept and project words are ordered alphabetically, with concept words appearing more than once being grouped together. According to the selected setting, the number of word pairs is n = 14. The bottom line in each column provides the average similarity score as computed by the TOP n COS SIM AVG measure. This value is compared against the threshold T = .310: the similarity is higher than T in the TP and FP cases, and lower otherwise. Without going into too much detail, it can be seen that the selected words provide a reasonable idea of the gist of the two documents. Another observation relates to the effect of using unstemmed, case-sensitive documents as input: the top-ranking words often contain inflectional variants (e.g. enzyme and enzymes, level and levels in the example), and words differing in case only can also be found. Currently, these are treated as distinct (though semantically similar) words, mainly for compatibility with the pretrained GloVe and Google embeddings. However, since our method puts a lot of emphasis on individual words, in particular those coming from the shorter of the two documents (the concept), results might be improved by merging these words (and their respective embedding vectors) (see Section 7).
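Inspecting this evidence amounts to printing the returned tuples. A usage sketch for the selected setting, reusing the earlier sketches (concept_bag, project_bag, fasttext_emb, and idf are assumed inputs):

```python
# Usage sketch: inspect the evidence behind one classification in the best
# Both setting (fastText embeddings, TF*IDF weighting, T = .310, n = 14).
# `concept_bag`, `project_bag`, `fasttext_emb`, and `idf` are assumed
# inputs produced by the preprocessing sketch above.
T, N = 0.310, 14
score, evidence = top_n_cos_sim_avg(concept_bag, project_bag, fasttext_emb,
                                    make_weight(concept_bag, idf),
                                    make_weight(project_bag, idf), N)
print(f"score = {score:.3f} -> {'match' if score >= T else 'no match'}")
for c_word, p_word, sim in evidence:
    # Each tuple shows which concept / project word pair contributed.
    print(f"{c_word:<20} {p_word:<20} {sim:.3f}")
```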

Related Work
While in this paper we apply our method to the Concept-Project matching task only, the underlying task of matching text sequences to each other is much more general. Many existing approaches follow the so-called compare-aggregate framework (Wang and Jiang (2017)). As the name suggests, these approaches first collect the results of element-wise matchings (comparisons), and then create the final result by aggregating them. Our method can be seen as a variant of compare-aggregate which is characterized by extremely simple methods for comparison (cosine vector similarity) and aggregation (averaging). Other approaches, like He and Lin (2016) and Wang and Jiang (2017), employ much more elaborate supervised neural network methods. Also, on a simpler level, the idea of averaging similarity scores (rather than scoring averaged representations) is not new: Camacho-Collados and Navigli (2016) use the average of pairwise word similarities to compute their compactness score.

Conclusion and Future Work
We presented a simple method for semantic matching of documents from heterogeneous collections as a solution to the Concept-Project matching task by Gong et al. (2018). Although much simpler, our method clearly outperformed the original system in most input settings. Another result is that, contrary to the claim made by Gong et al. (2018), the standard averaging approach does indeed work very well even for heterogeneous document collections, if appropriate weighting is applied. Due to its simplicity, we believe that our method can also be applied to other text matching tasks, including more 'standard' ones which do not necessarily involve heterogeneous document collections. This seems desirable because our method offers additional transparency by providing not only a similarity score, but also the subset of words on which the similarity score is based. Future work includes detailed error analysis, and exploration of methods to combine complementary information about (grammatically or orthographically) related words from word embedding resources. Also, we are currently experimenting with a pretrained ELMo (Peters et al. (2018)) model as another word embedding resource. ELMo takes word embeddings a step further by dynamically creating contextualized vectors from input word sequences (normally sentences).
Our initial experiments have been promising, but since ELMo tends to yield different, context-dependent vectors for the same word in the same document, ways still have to be found to combine them into single, document-wide vectors without (fully) sacrificing their context-awareness. The code used in this paper is available at https://github.com/nlpAThits/TopNCosSimAvg.