Danilo Croce


2023

pdf bib
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations
Danilo Croce | Luca Soldaini
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations

2022

pdf bib
Learning to Generate Examples for Semantic Processing Tasks
Danilo Croce | Simone Filice | Giuseppe Castellucci | Roberto Basili
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Even though recent Transformer-based architectures, such as BERT, have achieved impressive results in semantic processing tasks, their fine-tuning stage still requires large-scale training resources. Data Augmentation (DA) techniques can usually help to deal with low-resource settings. In Text Classification tasks, the objective of DA is the generation of well-formed sentences that i) represent the desired task category and ii) are novel with respect to existing sentences. In this paper, we propose a neural approach to automatically learn to generate new examples using a pre-trained sequence-to-sequence model. We first learn a task-oriented similarity function that we use to pair similar examples. Then, we use these example pairs to train a model to generate examples. Experiments in low-resource settings show that augmenting the training material with the proposed strategy systematically improves the results on text classification and natural language inference tasks by up to 10% accuracy, outperforming existing DA approaches.
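A minimal sketch of the pair-then-generate idea described in the abstract (not the authors' code; the TF-IDF similarity below is a stand-in for the learned task-oriented similarity, and all names are assumptions): examples of the same class are paired by similarity, and the resulting (source, target) pairs would be used to fine-tune a pre-trained sequence-to-sequence model that generates novel, class-preserving examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_generation_pairs(texts, labels):
    """Pair each example with its most similar same-class example."""
    tfidf = TfidfVectorizer().fit_transform(texts)
    sims = cosine_similarity(tfidf)
    pairs = []
    for i, label in enumerate(labels):
        best_j, best_sim = None, -1.0
        for j, other in enumerate(labels):
            if i != j and other == label and sims[i, j] > best_sim:
                best_j, best_sim = j, sims[i, j]
        if best_j is not None:
            pairs.append((texts[i], texts[best_j]))
    return pairs

# The pairs would then be used to fine-tune a seq2seq model (e.g., BART/T5
# with any standard seq2seq training loop); at augmentation time the model is
# fed an existing example and asked to generate a new one for the same class.
```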

2021

pdf bib
Learning to Solve NLP Tasks in an Incremental Number of Languages
Giuseppe Castellucci | Simone Filice | Danilo Croce | Roberto Basili
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

In real scenarios, a multilingual model trained to solve NLP tasks on a set of languages can be required to support new languages over time. Unfortunately, straightforward retraining on a dataset containing annotated examples for all the languages is both expensive and time-consuming, especially when the number of target languages grows. Moreover, the original annotated material may no longer be available due to storage or business constraints. Re-training only on the new-language data will inevitably result in Catastrophic Forgetting of previously acquired knowledge. We propose a Continual Learning strategy that updates a model to support new languages over time, while maintaining consistent results on previously learned languages. We define a Teacher-Student framework in which the existing model “teaches” its knowledge about the languages it supports to a student model, while the student is also trained on a new language. We report an experimental evaluation on several tasks, including Sentence Classification, Relational Learning and Sequence Labeling.
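A minimal PyTorch sketch of the Teacher-Student objective outlined above (an illustration, not the paper's code; the weighting, temperature, and which data the distillation term is computed on are assumptions): the student is trained with cross-entropy on the new-language examples, plus a distillation term that keeps its predictions close to those of the frozen teacher on the previously supported languages.

```python
import torch
import torch.nn.functional as F

def continual_step(student, teacher, new_batch, old_batch, alpha=0.5, T=2.0):
    # Supervised loss on the newly added language.
    new_logits = student(new_batch["input"])
    ce_loss = F.cross_entropy(new_logits, new_batch["labels"])

    # Distillation loss: the frozen teacher provides soft targets on data
    # from the already supported languages (or whatever data is available).
    with torch.no_grad():
        teacher_logits = teacher(old_batch["input"])
    old_logits = student(old_batch["input"])
    kd_loss = F.kl_div(
        F.log_softmax(old_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    return alpha * ce_loss + (1.0 - alpha) * kd_loss
```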

2020

pdf bib
GAN-BERT: Generative Adversarial Learning for Robust Text Classification with a Bunch of Labeled Examples
Danilo Croce | Giuseppe Castellucci | Roberto Basili
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Recent Transformer-based architectures, e.g., BERT, provide impressive results in many Natural Language Processing tasks. However, most of the adopted benchmarks are made of (sometimes hundreds of) thousands of examples. In many real scenarios, obtaining high-quality annotated data is expensive and time-consuming; in contrast, unlabeled examples characterizing the target task can, in general, be easily collected. One promising method to enable semi-supervised learning has been proposed in image processing, based on Semi-Supervised Generative Adversarial Networks. In this paper, we propose GAN-BERT, which extends the fine-tuning of BERT-like architectures with unlabeled data in a generative adversarial setting. Experimental results show that the requirement for annotated examples can be drastically reduced (to as few as 50-100 annotated examples) while still obtaining good performance in several sentence classification tasks.
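A simplified PyTorch sketch of the semi-supervised adversarial objective (an illustration under assumed sizes and names, not the released GAN-BERT code): a generator maps noise to fake "sentence representations", while a discriminator on top of BERT's sentence embedding predicts the k task classes plus one extra "fake" class, so that labeled, unlabeled and generated examples all contribute to training.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

hidden, k = 768, 5                        # BERT hidden size, task classes (assumed)
generator = nn.Sequential(nn.Linear(100, hidden), nn.LeakyReLU(0.2),
                          nn.Linear(hidden, hidden))
discriminator = nn.Linear(hidden, k + 1)  # extra output = "fake" class

def gan_bert_losses(labeled_repr, labels, unlabeled_repr, noise, eps=1e-8):
    """labeled_repr / unlabeled_repr: BERT sentence embeddings; noise: z ~ N(0, 1)."""
    fake_repr = generator(noise)

    def p_fake(r):
        return F.softmax(discriminator(r), dim=-1)[:, k]

    # Discriminator: supervised term on the few labeled examples...
    d_sup = F.cross_entropy(discriminator(labeled_repr), labels)
    # ...plus unsupervised terms: real examples should not look "fake",
    # generated ones should.
    d_unsup = (-torch.log(1.0 - p_fake(unlabeled_repr) + eps).mean()
               - torch.log(p_fake(fake_repr.detach()) + eps).mean())

    # Generator tries to produce representations not flagged as "fake".
    g_loss = -torch.log(1.0 - p_fake(fake_repr) + eps).mean()
    return d_sup + d_unsup, g_loss
```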

2019

pdf bib
Auditing Deep Learning processes through Kernel-based Explanatory Models
Danilo Croce | Daniele Rossini | Roberto Basili
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

As NLP systems become more pervasive, their accountability gains value as a focal point of effort. The epistemological opaqueness of nonlinear learning methods, such as deep learning models, can be a major drawback for their adoption. In this paper, we discuss the application of Layerwise Relevance Propagation over a linguistically motivated neural architecture, the Kernel-based Deep Architecture, in order to trace back connections between the linguistic properties of input instances and system decisions. Such connections then guide the construction of argumentations about the network’s inferences, i.e., explanations based on real examples that are semantically related to the input. We propose a methodology to evaluate the transparency and coherence of such analogy-based explanations, modeling an audit stage for the system. Quantitative analyses on two semantic tasks, i.e., question classification and semantic role labeling, show that the explanatory capabilities (native in KDAs) are effective and pave the way to more complex argumentation methods.
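A minimal NumPy sketch of the Layerwise Relevance Propagation epsilon-rule for a single dense layer (illustrative only; shapes and variable names are assumptions, not the paper's implementation). In a Kernel-based Deep Architecture the input dimensions correspond to landmark training examples, so the relevance that flows back to them identifies the real examples that "explain" a decision.

```python
import numpy as np

def lrp_epsilon(a, W, b, relevance_out, eps=1e-6):
    """Redistribute the relevance of a layer's output onto its inputs.

    a: input activations, shape (d_in,)
    W: weights, shape (d_in, d_out); b: biases, shape (d_out,)
    relevance_out: relevance of each output unit, shape (d_out,)
    """
    z = a @ W + b                  # forward pre-activations
    z = z + eps * np.sign(z)       # stabilizer against small denominators
    s = relevance_out / z          # share of relevance per output unit
    return a * (W @ s)             # relevance assigned to each input unit

# Applying the rule layer by layer, from the classification output down to
# the landmark layer, yields per-landmark relevance scores that can be
# turned into example-based explanations.
```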

2018

pdf bib
Explaining non-linear Classifier Decisions within Kernel-based Deep Architectures
Danilo Croce | Daniele Rossini | Roberto Basili
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

Nonlinear methods such as deep neural networks achieve state-of-the-art performance in several semantic NLP tasks. However, epistemologically transparent decisions are not provided, due to the limited interpretability of the underlying acquired neural models. In neural-based semantic inference tasks, epistemological transparency corresponds to the ability to trace back causal connections between the linguistic properties of an input instance and the produced classification output. In this paper, we propose the use of a methodology, called Layerwise Relevance Propagation, over linguistically motivated neural architectures, namely Kernel-based Deep Architectures (KDAs), to guide argumentations and explanation inferences. In this way, each decision provided by a KDA can be linked to real examples that are linguistically related to the input instance: these can be used to motivate the network output. Quantitative analysis shows that richer explanations about the semantic and syntagmatic structures of the examples characterize more convincing arguments in two tasks, i.e., question classification and semantic role labeling.

2017

pdf bib
Structured Learning for Context-aware Spoken Language Understanding of Robotic Commands
Andrea Vanzo | Danilo Croce | Roberto Basili | Daniele Nardi
Proceedings of the First Workshop on Language Grounding for Robotics

Service robots are expected to operate in specific environments, where the presence of humans plays a key role. A major feature of such robotic platforms is thus the ability to react to spoken commands. This requires understanding the user utterance with an accuracy sufficient to trigger the appropriate robot reaction. Such correct interpretation of linguistic exchanges depends on physical, cognitive and language-dependent aspects related to the environment. In this work, we present the empirical evaluation of an adaptive Spoken Language Understanding chain for robotic commands that explicitly depends on the operational environment during both the learning and recognition stages. The effectiveness of such context-sensitive command interpretation is tested against an extension of an existing corpus of commands that introduces explicit perceptual knowledge: this enables deeper measures showing that more accurate disambiguation capabilities can actually be obtained.

pdf bib
Deep Learning in Semantic Kernel Spaces
Danilo Croce | Simone Filice | Giuseppe Castellucci | Roberto Basili
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Kernel methods enable the direct usage of structured representations of textual data during language learning and inference tasks. Expressive kernels, such as Tree Kernels, achieve excellent performance in NLP. On the other hand, deep neural networks have been shown to be effective in automatically learning feature representations during training. However, their input is tensor data, i.e., they cannot manage rich structured information. In this paper, we show that expressive kernels and deep neural networks can be combined in a common framework in order to (i) explicitly model structured information and (ii) learn non-linear decision functions. We show that the input layer of a deep architecture can be pre-trained through the application of the Nystrom low-rank approximation of kernel spaces. The resulting “kernelized” neural network achieves state-of-the-art accuracy in three different tasks.
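A short NumPy sketch of the Nystrom projection mentioned in the abstract (an illustration under assumed names, not the paper's implementation): a few "landmark" examples are sampled, the kernel between any input and the landmarks is computed, and the eigendecomposition of the landmark kernel matrix maps each input to a low-dimensional vector whose dot products approximate the original kernel. That vector is what feeds the first layer of the neural network.

```python
import numpy as np

def nystrom_embedding(kernel, X, landmarks, eps=1e-10):
    """Return the Nystrom approximation of the kernel space for inputs X."""
    K_ll = np.array([[kernel(a, b) for b in landmarks] for a in landmarks])
    K_xl = np.array([[kernel(x, b) for b in landmarks] for x in X])
    eigval, eigvec = np.linalg.eigh(K_ll)
    eigval = np.maximum(eigval, eps)       # guard against tiny negative values
    # Projection U * S^{-1/2}: dot products of embedded inputs approximate
    # the original kernel function.
    proj = eigvec / np.sqrt(eigval)
    return K_xl @ proj

# Example with a simple linear kernel and the first 50 examples as landmarks:
# emb = nystrom_embedding(lambda a, b: float(np.dot(a, b)), X, X[:50])
```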

2016

pdf bib
A Language Independent Method for Generating Large Scale Polarity Lexicons
Giuseppe Castellucci | Danilo Croce | Roberto Basili
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

Sentiment Analysis systems aim at detecting opinions and sentiments that are expressed in texts. Many approaches in the literature are based on resources that model the prior polarity of words or multi-word expressions, i.e., a polarity lexicon. Such resources are defined by teams of annotators, i.e., a manual annotation is provided to associate emotional or sentiment facets to the lexicon entries. The development of such lexicons is an expensive and language-dependent process, and they often do not cover all the linguistic sentiment phenomena. Moreover, once a lexicon is defined, it can hardly be adopted in a different language or even a different domain. In this paper, we present several Distributional Polarity Lexicons (DPLs), i.e., large-scale polarity lexicons acquired with an unsupervised methodology based on Distributional Models of Lexical Semantics. Given a set of heuristically annotated sentences from Twitter, we transfer the sentiment information from sentences to words. The approach is mostly unsupervised, and experimental evaluations on Sentiment Analysis tasks in two languages show the benefits of the generated resources. The generated DPLs are publicly available in English and Italian.
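A rough sketch of how polarity can be transferred from heuristically labeled sentences to words (a simplification under assumed names, not the paper's exact method): sentences are embedded in the same distributional space as words by averaging word vectors, a polarity classifier is trained at the sentence level, and the same classifier is then applied to individual word vectors to score every vocabulary entry and build the lexicon.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def build_polarity_lexicon(sentences, labels, word_vectors):
    """sentences: list of token lists; labels: 0/1 polarity from heuristics
    (e.g., emoticons); word_vectors: dict token -> np.ndarray."""
    def embed(tokens):
        return np.mean([word_vectors[t] for t in tokens if t in word_vectors],
                       axis=0)

    # Keep only sentences with at least one in-vocabulary token.
    pairs = [(embed(s), y) for s, y in zip(sentences, labels)
             if any(t in word_vectors for t in s)]
    X = np.array([p[0] for p in pairs])
    y = np.array([p[1] for p in pairs])
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # Score every word with the sentence-level classifier.
    return {w: float(clf.predict_proba(v.reshape(1, -1))[0, 1])
            for w, v in word_vectors.items()}
```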

pdf bib
KeLP at SemEval-2016 Task 3: Learning Semantic Relations between Questions and Answers
Simone Filice | Danilo Croce | Alessandro Moschitti | Roberto Basili
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)

2015

pdf bib
KeLP: a Kernel-based Learning Platform for Natural Language Processing
Simone Filice | Giuseppe Castellucci | Danilo Croce | Roberto Basili
Proceedings of ACL-IJCNLP 2015 System Demonstrations

2014

pdf bib
UNITOR: Aspect Based Sentiment Analysis with Structured Learning
Giuseppe Castellucci | Simone Filice | Danilo Croce | Roberto Basili
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)

pdf bib
HuRIC: a Human Robot Interaction Corpus
Emanuele Bastianelli | Giuseppe Castellucci | Danilo Croce | Luca Iocchi | Roberto Basili | Daniele Nardi
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

Recent years have seen the development of large-scale resources (e.g., FrameNet for Frame Semantics) that have supported the definition of several state-of-the-art approaches in Natural Language Processing. However, the reuse of existing resources in heterogeneous domains such as Human Robot Interaction is not straightforward. The generalization offered by many data-driven methods is strongly biased by the employed data, and their performance in out-of-domain conditions exhibits large drops. In this paper, we present the Human Robot Interaction Corpus (HuRIC). It is made of audio files paired with their transcriptions, referring to commands for a robot, e.g., in a home environment. The recorded sentences are annotated with different kinds of linguistic information, ranging from morphological and syntactic information to rich semantic information, according to Frame Semantics, to characterize robot actions, and Spatial Semantics, to capture the robot environment. All texts are represented through the Abstract Meaning Representation, to adopt a simple but expressive representation of commands that can be easily translated into the internal representation of the robot.

pdf bib
A context-based model for Sentiment Analysis in Twitter
Andrea Vanzo | Danilo Croce | Roberto Basili
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

2013

pdf bib
Towards Compositional Tree Kernels
Paolo Annesi | Danilo Croce | Roberto Basili
Proceedings of the Joint Symposium on Semantic Processing. Textual Inference and Structures in Corpora

pdf bib
Textual Inference and Meaning Representation in Human Robot Interaction
Emanuele Bastianelli | Giuseppe Castellucci | Danilo Croce | Roberto Basili
Proceedings of the Joint Symposium on Semantic Processing. Textual Inference and Structures in Corpora

pdf bib
UNITOR-CORE_TYPED: Combining Text Similarity and Semantic Filters through SV Regression
Danilo Croce | Valerio Storch | Roberto Basili
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity

pdf bib
UNITOR: Combining Syntactic and Semantic Kernels for Twitter Sentiment Analysis
Giuseppe Castellucci | Simone Filice | Danilo Croce | Roberto Basili
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)

pdf bib
UNITOR-HMM-TK: Structured Kernel-based learning for Spatial Role Labeling
Emanuele Bastianelli | Danilo Croce | Roberto Basili | Daniele Nardi
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)

2012

pdf bib
Verb Classification using Distributional Similarity in Syntactic and Semantic Structures
Danilo Croce | Alessandro Moschitti | Roberto Basili | Martha Palmer
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf bib
UNITOR: Combining Semantic Text Similarity functions through SV Regression
Danilo Croce | Paolo Annesi | Valerio Storch | Roberto Basili
*SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012)

2011

pdf bib
Structured Lexical Similarity via Convolution Kernels on Dependency Trees
Danilo Croce | Alessandro Moschitti | Roberto Basili
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

2010

pdf bib
Extensive Evaluation of a FrameNet-WordNet mapping resource
Diego De Cao | Danilo Croce | Roberto Basili
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

Lexical resources are basic components of many text processing systems devoted to information extraction, question answering or dialogue. In past years, many resources have been developed, such as FrameNet and WordNet. FrameNet describes prototypical situations (i.e., Frames), while WordNet defines lexical meanings (senses) for the majority of English nouns, verbs, adjectives and adverbs. A major difference between FrameNet and WordNet concerns their coverage. Because of this gap in coverage, in recent years several approaches have been studied to build a bridge between the two resources, so that one resource can be used to extend the coverage of the other. These approaches range from unsupervised to supervised methods. The major problem is that there is no standard for evaluating the mapping: each work has tested its own approach against a custom gold standard. This work gives an extensive evaluation of the model proposed in (De Cao et al., 2008) using the gold standards proposed in other works. Moreover, it gives an empirical comparison with other available resources. As an outcome of this work, we also release the full mapping resource built according to the model proposed in (De Cao et al., 2008).

pdf bib
Manifold Learning for the Semi-Supervised Induction of FrameNet Predicates: An Empirical Investigation
Danilo Croce | Daniele Previtali
Proceedings of the 2010 Workshop on GEometrical Models of Natural Language Semantics

pdf bib
Towards Open-Domain Semantic Role Labeling
Danilo Croce | Cristina Giannone | Paolo Annesi | Roberto Basili
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

2008

pdf bib
Automatic induction of FrameNet lexical units
Marco Pennacchiotti | Diego De Cao | Roberto Basili | Danilo Croce | Michael Roth
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

pdf bib
Combining Word Sense and Usage for Modeling Frame Semantics
Diego De Cao | Danilo Croce | Marco Pennacchiotti | Roberto Basili
Semantics in Text Processing. STEP 2008 Conference Proceedings