Francesca Fallucchi


2023

The Dark Side of the Language: Pre-trained Transformers in the DarkNet
Leonardo Ranaldi | Aria Nourbakhsh | Elena Sofia Ruzzetti | Arianna Patrizi | Dario Onorati | Michele Mastromattei | Francesca Fallucchi | Fabio Massimo Zanzotto
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing

Pre-trained Transformers are challenging human performance in many Natural Language Processing tasks. The massive datasets used for pre-training seem to be the key to their success on existing tasks. In this paper, we explore how a range of pre-trained natural language understanding models perform on genuinely unseen sentences, provided by classification tasks over a DarkNet corpus. Surprisingly, results show that syntactic and lexical neural networks perform on par with pre-trained Transformers even after fine-tuning. Only after what we call extreme domain adaptation, that is, retraining with the masked-language-model objective on the entire novel corpus, do pre-trained Transformers reach their usual high results. This suggests that huge pre-training corpora may give Transformers unexpected help simply because they have already been exposed to many of the possible sentences.
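
As a rough illustration of what such extreme domain adaptation involves, the sketch below continues masked-language-model training on a raw domain corpus with the Hugging Face transformers library; the model name, corpus file, and hyperparameters are illustrative assumptions, not the paper's actual setup.

```python
# Sketch: continue masked-language-model pre-training on a novel corpus
# ("extreme domain adaptation"), then fine-tune for classification as usual.
# Assumed: bert-base-uncased and a plain-text file darknet_corpus.txt.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Tokenize the raw domain corpus, one sentence per line.
corpus = load_dataset("text", data_files={"train": "darknet_corpus.txt"})
tokenized = corpus["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens: the standard BERT MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-adapted", num_train_epochs=3,
                           per_device_train_batch_size=32),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
model.save_pretrained("mlm-adapted")  # starting point for fine-tuning
```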

2022

Lacking the Embedding of a Word? Look it up into a Traditional Dictionary
Elena Sofia Ruzzetti | Leonardo Ranaldi | Michele Mastromattei | Francesca Fallucchi | Noemi Scarpato | Fabio Massimo Zanzotto
Findings of the Association for Computational Linguistics: ACL 2022

Word embeddings are powerful dictionaries, which may easily capture language variations. However, these dictionaries fail to give sense to rare words, which are, surprisingly, often covered by traditional dictionaries. In this paper, we propose to use definitions retrieved from traditional dictionaries to produce word embeddings for rare words. For this purpose, we introduce two methods: Definition Neural Network (DefiNNet) and Define BERT (DefBERT). In our experiments, DefiNNet and DefBERT significantly outperform state-of-the-art as well as baseline methods devised for producing embeddings of unknown words. In fact, DefiNNet significantly outperforms FastText, which implements an n-gram-based method for the same task, and DefBERT significantly outperforms the standard BERT method for OOV words. Hence, definitions in traditional dictionaries are useful for building word embeddings for rare words.
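
The core idea can be sketched with off-the-shelf tools: encode the rare word's dictionary definition with BERT and pool the token representations into a single vector. This is only a minimal approximation of the idea, not the paper's DefiNNet or DefBERT architecture; the model and the gloss below are illustrative.

```python
# Sketch: derive an embedding for a rare word from its dictionary
# definition by mean-pooling BERT's token representations.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def definition_embedding(definition: str) -> torch.Tensor:
    """Embed a word via its dictionary definition."""
    inputs = tokenizer(definition, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)   # ignore padding
    return (hidden * mask).sum(1) / mask.sum(1)     # mean pooling

# Hypothetical dictionary gloss for a rare word:
vec = definition_embedding("a small drum used in military bands")
print(vec.shape)  # torch.Size([1, 768])
```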

2020

KERMIT: Complementing Transformer Architectures with Encoders of Explicit Syntactic Interpretations
Fabio Massimo Zanzotto | Andrea Santilli | Leonardo Ranaldi | Dario Onorati | Pierfrancesco Tommasino | Francesca Fallucchi
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Syntactic parsers have dominated natural language understanding for decades. Yet, their syntactic interpretations are losing centrality in downstream tasks due to the success of large-scale textual representation learners. In this paper, we propose KERMIT (Kernel-inspired Encoder with Recursive Mechanism for Interpretable Trees) to embed symbolic syntactic parse trees into artificial neural networks and to visualize how syntax is used in inference. We experimented with KERMIT paired with two state-of-the-art transformer-based universal sentence encoders (BERT and XLNet), and we showed that KERMIT can indeed boost their performance by effectively embedding human-coded universal syntactic representations in neural networks.
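
A loose sketch of the general recipe (not KERMIT's actual kernel-inspired encoder): represent the parse tree as a fixed-size vector, here by hashing production rules, and concatenate it with BERT's sentence vector before the classification head. All names and dimensions below are assumptions for illustration.

```python
# Sketch: combine a hashed parse-tree encoding with BERT's [CLS] vector.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

TREE_DIM = 1024  # illustrative size of the tree-encoding space

def tree_encode(productions: list[str]) -> torch.Tensor:
    """Hash each production rule (e.g. 'S -> NP VP') into a slot."""
    vec = torch.zeros(TREE_DIM)
    for rule in productions:
        vec[hash(rule) % TREE_DIM] += 1.0
    return vec

class SyntaxAugmentedClassifier(nn.Module):
    def __init__(self, num_labels: int = 2):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.head = nn.Linear(768 + TREE_DIM, num_labels)

    def forward(self, input_ids, attention_mask, tree_vec):
        cls = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.head(torch.cat([cls, tree_vec], dim=-1))

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = tok("the cat sat on the mat", return_tensors="pt")
tree = tree_encode(["S -> NP VP", "NP -> DT NN", "VP -> VBD PP"]).unsqueeze(0)
logits = SyntaxAugmentedClassifier()(enc["input_ids"],
                                     enc["attention_mask"], tree)
```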

2010

Generic Ontology Learners on Application Domains
Francesca Fallucchi | Maria Teresa Pazienza | Fabio Massimo Zanzotto
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

In ontology learning from texts, ontology-rich domains provide large structured domain knowledge repositories, or large general corpora together with large general structured knowledge repositories such as WordNet (Miller, 1995). Ontology learning methods are more useful in ontology-poor domains. Yet, under these conditions, such methods do not achieve particularly high performance, as training material is insufficient. In this paper we present an LSP ontology learning method that can exploit models learned from a generic domain to extract new information in a specific domain. In our approach, we first learn a model from training data and then use the learned model to discover knowledge in a specific domain. We tested our model adaptation strategy by applying a model learned on a background domain to learn is-a networks in the Earth Observation Domain as the specific domain. We demonstrate that our method captures domain knowledge better than other generic models: our model better matches what domain experts expect than a baseline method based only on WordNet, which in turn correlates better with non-domain annotators asked to produce the ontology for the specific domain.
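
The transfer recipe itself is simple to sketch, even though the paper's LSP method and feature set are richer: fit an is-a classifier on labelled contexts from a generic background domain, then apply it unchanged to candidate pairs from the specific domain. The features, data, and labels below are hypothetical placeholders.

```python
# Sketch of the domain-transfer recipe for is-a (hypernymy) learning.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Generic-domain training data: contexts in which candidate
# (hyponym, hypernym) pairs were observed, with gold labels.
generic_contexts = [
    "dogs and other animals",   # dog is-a animal  -> 1
    "red or green apples",      # not an is-a cue  -> 0
    "cars and other vehicles",  # car is-a vehicle -> 1
    "trees near the house",     # not an is-a cue  -> 0
]
generic_labels = [1, 0, 1, 0]

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(generic_contexts, generic_labels)

# Specific domain (e.g. Earth Observation): reuse the learned model as-is.
eo_contexts = ["radiometers and other sensors"]
print(model.predict_proba(eo_contexts))  # [P(not is-a), P(is-a)]
```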

Estimating Linear Models for Compositional Distributional Semantics
Fabio Massimo Zanzotto | Ioannis Korkontzelos | Francesca Fallucchi | Suresh Manandhar
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

2009

SVD Feature Selection for Probabilistic Taxonomy Learning
Francesca Fallucchi | Fabio Massimo Zanzotto
Proceedings of the Workshop on Geometrical Models of Natural Language Semantics

Singular Value Decomposition for Feature Selection in Taxonomy Learning
Francesca Fallucchi | Fabio Massimo Zanzotto
Proceedings of the International Conference RANLP-2009

2008

Yet another Platform for Extracting Knowledge from Corpora
Francesca Fallucchi | Fabio Massimo Zanzotto
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

The research field of “extracting knowledge bases from text collections” seems to be mature: its target and its working hypotheses are clear. In this paper we propose YAPEK, Yet Another Platform for Extracting Knowledge from corpora, intended as a common base for collecting the majority of algorithms that extract knowledge bases from corpora. The idea is that, when many knowledge extraction algorithms are collected under the same platform, relative comparisons become clearer, and many algorithms can be leveraged to extract more valuable knowledge for final tasks such as Textual Entailment Recognition. Since we want to collect many knowledge extraction algorithms, YAPEK is based on the three working hypotheses of the area: the basic hypothesis, the distributional hypothesis, and point-wise assertion patterns. In YAPEK, these three hypotheses define two spaces: the space of the target textual forms and the space of the contexts. The platform makes it possible to rapidly implement many models for extracting knowledge from corpora, as it provides clear entry points for modelling what actually differs between algorithms: the feature spaces, the distances in these spaces, and the actual algorithm.
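
The entry points the abstract mentions can be pictured as pluggable components: a feature space mapping textual forms to contexts, a distance over that space, and an algorithm skeleton that combines them. The sketch below illustrates this structure only; the names are not YAPEK's actual API.

```python
# Sketch: a knowledge-extraction platform with pluggable entry points.
from abc import ABC, abstractmethod

class FeatureSpace(ABC):
    @abstractmethod
    def features(self, textual_form: str, corpus: list[str]) -> dict:
        """Map a target textual form to its context features."""

class ContextWindow(FeatureSpace):
    """One concrete feature space: co-occurring tokens in a sentence."""
    def features(self, textual_form, corpus):
        feats = {}
        for sentence in corpus:
            if textual_form in sentence.split():
                for token in sentence.split():
                    if token != textual_form:
                        feats[token] = feats.get(token, 0) + 1
        return feats

def cosine(a: dict, b: dict) -> float:
    """One pluggable similarity over sparse feature dicts."""
    dot = sum(v * b.get(k, 0) for k, v in a.items())
    norm = (sum(v * v for v in a.values()) ** 0.5
            * sum(v * v for v in b.values()) ** 0.5)
    return dot / norm if norm else 0.0

class ExtractionAlgorithm:
    """The platform fixes the skeleton; plug-ins fill the entry points."""
    def __init__(self, space: FeatureSpace, similarity):
        self.space, self.similarity = space, similarity

    def related(self, w1, w2, corpus, threshold=0.5):
        return self.similarity(self.space.features(w1, corpus),
                               self.space.features(w2, corpus)) > threshold

algo = ExtractionAlgorithm(ContextWindow(), cosine)
print(algo.related("dog", "cat", ["the dog barks", "the cat barks"]))
```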