Gianni Barlacchi


2023

pdf bib
Neural Ranking with Weak Supervision for Open-Domain Question Answering: A Survey
Xiaoyu Shen | Svitlana Vakulenko | Marco del Tredici | Gianni Barlacchi | Bill Byrne | Adrià de Gispert
Findings of the Association for Computational Linguistics: EACL 2023

Neural ranking (NR) has become a key component of open-domain question answering as a means of accessing external knowledge. However, training a good NR model requires substantial amounts of relevance annotations, which are very costly to obtain at scale. To address this, a growing body of work proposes to reduce the annotation cost by training the NR model with weak supervision (WS) instead. These works differ in the resources they require and employ a diverse set of WS signals to train the model. Understanding such differences is crucial for choosing the right WS technique. To facilitate this understanding, we provide a structured overview of standard WS signals used for training an NR model. Based on their required resources, we divide them into three main categories: (1) only documents are needed; (2) documents and questions are needed; and (3) documents and question-answer pairs are needed. For every WS signal, we review its general idea and design choices. Promising directions are outlined for future research.
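
As an illustration of the first, documents-only category, one well-known WS signal is the Inverse Cloze Task: a sentence sampled from a passage serves as a pseudo-query, and the remaining sentences serve as its positive document. The sketch below is a minimal, self-contained rendering of that idea; all names are illustrative and not taken from the paper.

    import random

    def inverse_cloze_pair(passage_sentences, seed=0):
        """Turn one passage into a (pseudo_query, positive_document) pair."""
        rng = random.Random(seed)
        i = rng.randrange(len(passage_sentences))
        pseudo_query = passage_sentences[i]
        positive_doc = " ".join(s for j, s in enumerate(passage_sentences) if j != i)
        return pseudo_query, positive_doc

    query, doc = inverse_cloze_pair([
        "Neural rankers need relevance labels.",
        "Weak supervision replaces manual annotation.",
        "Pseudo-queries can be drawn from the documents themselves.",
    ])
    print(query, "->", doc)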

pdf bib
Strong and Efficient Baselines for Open Domain Conversational Question Answering
Andrei Coman | Gianni Barlacchi | Adrià de Gispert
Findings of the Association for Computational Linguistics: EMNLP 2023

Unlike the Open Domain Question Answering (ODQA) setting, the conversational (ODConvQA) domain has received limited attention when it comes to reevaluating baselines for both efficiency and effectiveness. In this paper, we study the State-of-the-Art (SotA) Dense Passage Retrieval (DPR) retriever and Fusion-in-Decoder (FiD) reader pipeline, and show that it significantly underperforms when applied to ODConvQA tasks due to various limitations. We then propose and evaluate strong yet simple and efficient baselines, by introducing a fast reranking component between the retriever and the reader, and by performing targeted finetuning steps. Experiments on two ODConvQA tasks, namely TopiOCQA and OR-QuAC, show that our method improves the SotA results, while reducing the reader’s latency by 60%. Finally, we provide new and valuable insights into the development of challenging baselines that serve as a reference for future, more intricate approaches, including those that leverage Large Language Models (LLMs).
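
The latency saving comes from inserting a fast reranker that prunes the retrieved list before the expensive reader runs. Below is a rough, library-free sketch of such a retrieve-rerank-read pipeline; the crude lexical scorer stands in for DPR, the reranker, and FiD alike, and is not the paper's actual models.

    def overlap(query, passage):
        # crude lexical score used as a stand-in for every neural model below
        return len(set(query.lower().split()) & set(passage.lower().split()))

    def retrieve(question, corpus, k=100):
        # stand-in for the DPR retriever
        return sorted(corpus, key=lambda p: -overlap(question, p))[:k]

    def rerank(question, passages, k=5):
        # stand-in for the fast reranker inserted between retriever and reader
        return sorted(passages, key=lambda p: -overlap(question, p))[:k]

    def read(question, passages):
        # stand-in for the FiD reader; real FiD fuses all passages in the decoder
        return passages[0]

    corpus = [
        "Dense Passage Retrieval uses a bi-encoder over questions and passages.",
        "Fusion-in-Decoder fuses many retrieved passages inside the decoder.",
        "TopiOCQA is a conversational question answering dataset.",
    ]
    question = "what does fusion in decoder do"
    print(read(question, rerank(question, retrieve(question, corpus))))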

2022

pdf bib
FocusQA: Open-Domain Question Answering with a Context in Focus
Gianni Barlacchi | Ivano Lauriola | Alessandro Moschitti | Marco Del Tredici | Xiaoyu Shen | Thuy Vu | Bill Byrne | Adrià de Gispert
Findings of the Association for Computational Linguistics: EMNLP 2022

We introduce question answering with a context in focus, a task that simulates free interaction with a QA system. The user reads some information about a topic on a screen and can follow up with questions that may or may not relate to that topic; the answer may be found in the document containing the on-screen content or in other pages. We call such information the context. To study the task, we construct FocusQA, a dataset for answer sentence selection (AS2) with 12,165 unique question/context pairs and a total of 109,940 answers. To build the dataset, we developed a novel methodology that takes existing questions and pairs them with relevant contexts. To show the benefits of this approach, we present a comparative analysis with a set of questions written by humans after reading the context, showing that our approach greatly helps in eliciting more realistic question/context pairs. Finally, we show that the task poses several challenges for incorporating contextual information. In this respect, we introduce strong baselines for answer sentence selection that outperform state-of-the-art AS2 models by up to 21.3 absolute points in precision.
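
Conceptually, context-aware AS2 scores each candidate answer sentence against the (question, context) pair rather than the question alone. The toy sketch below uses a placeholder overlap-based scorer, not the paper's neural baselines, purely to make the interface concrete.

    def score(question, context, candidate):
        # placeholder: question overlap plus a smaller bonus for context overlap
        q = set(question.lower().split())
        c = set(context.lower().split())
        a = set(candidate.lower().split())
        return len(q & a) + 0.5 * len(c & a)

    def select_answer(question, context, candidates):
        return max(candidates, key=lambda s: score(question, context, s))

    context = "A page about the Eiffel Tower, completed in 1889 in Paris."
    print(select_answer("when was it completed", context, [
        "The tower was completed in 1889.",
        "Paris is the capital of France.",
    ]))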

pdf bib
Product Answer Generation from Heterogeneous Sources: A New Benchmark and Best Practices
Xiaoyu Shen | Gianni Barlacchi | Marco Del Tredici | Weiwei Cheng | Bill Byrne | Adrià de Gispert
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)

It is of great value to answer product questions based on heterogeneous information sources available on web product pages, e.g., semi-structured attributes, text descriptions, user-provided contents, etc. However, these sources have different structures and writing styles, which poses challenges for (1) evidence ranking, (2) source selection, and (3) answer generation. In this paper, we build a benchmark with annotations for both evidence selection and answer generation covering 6 information sources. Based on this benchmark, we conduct a comprehensive study and present a set of best practices. We show that all sources are important and contribute to answering questions. Handling all sources within a single model produces comparable confidence scores across sources, and combining multiple sources for training always helps, even for sources with totally different structures. We further propose a novel data augmentation method to iteratively create training samples for answer generation, which achieves close-to-human performance with only a few thousand annotations. Finally, we perform an in-depth error analysis of model predictions and highlight the challenges for future research.
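
Handling all sources in one model presupposes a shared evidence representation. The hedged sketch below flattens heterogeneous product-page sources into common (source, text) evidence records that a single ranker or generator could score on a comparable scale; the field names are illustrative only, not the benchmark's schema.

    def flatten_sources(product):
        """Normalize each source into (source_name, text) evidence records."""
        evidence = []
        for key, value in product.get("attributes", {}).items():
            evidence.append(("attribute", f"{key}: {value}"))
        for sentence in product.get("description", []):
            evidence.append(("description", sentence))
        for review in product.get("reviews", []):
            evidence.append(("review", review))
        return evidence

    product = {
        "attributes": {"battery life": "10 hours"},
        "description": ["Lightweight laptop with an aluminium body."],
        "reviews": ["The battery easily lasts a full workday."],
    }
    for source, text in flatten_sources(product):
        print(source, "->", text)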

pdf bib
semiPQA: A Study on Product Question Answering over Semi-structured Data
Xiaoyu Shen | Gianni Barlacchi | Marco Del Tredici | Weiwei Cheng | Adrià de Gispert
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)

Product question answering (PQA) aims to automatically address customer questions to improve their online shopping experience. Current research mainly focuses on finding answers from either unstructured text, like product descriptions and user reviews, or structured knowledge bases with pre-defined schemas. Apart from these two sources, a lot of product information is represented in a semi-structured way, e.g., key-value pairs, lists, tables, and JSON and XML files. Such semi-structured data can be a valuable answer source since it is better organized than free text while being easier to construct than structured knowledge bases. However, little attention has been paid to it. To fill this gap, we study how to effectively incorporate semi-structured answer sources for PQA, focusing on presenting answers in natural, fluent sentences. To this end, we present semiPQA: a dataset to benchmark PQA over semi-structured data. It contains 11,243 written questions about JSON-formatted data covering 320 unique attribute types. Each data point is paired with manually annotated text that describes its contents, so that we can train a neural answer presenter to present the data in a natural way. We provide baseline results and a deep analysis of the successes and challenges of leveraging semi-structured data for PQA. In general, state-of-the-art neural models perform remarkably well on seen attribute types. For unseen attribute types, however, a noticeable drop is observed in both answer presentation and attribute ranking.
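
The answer-presentation step maps a semi-structured record to a fluent sentence. The paper trains a neural presenter for this; the template below is only a trivial stand-in to make the input/output format concrete, with an assumed single-attribute JSON record.

    import json

    def present(record):
        """Render a single-attribute JSON record as a sentence (template stand-in)."""
        data = json.loads(record)
        (key, value), = data.items()  # assumes one attribute per record
        return f"The {key} of this product is {value}."

    print(present('{"screen size": "13 inches"}'))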

pdf bib
From Rewriting to Remembering: Common Ground for Conversational QA Models
Marco Del Tredici | Xiaoyu Shen | Gianni Barlacchi | Bill Byrne | Adrià de Gispert
Proceedings of the 4th Workshop on NLP for Conversational AI

In conversational QA, models have to leverage information in previous turns to answer upcoming questions. Current approaches, such as Question Rewriting, struggle to extract relevant information as the conversation unfolds. We introduce the Common Ground (CG), an approach to accumulate conversational information as it emerges and select the relevant information at every turn. We show that CG offers a more efficient and human-like way to exploit conversational information compared to existing approaches, leading to improvements on Open Domain Conversational QA.
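
A minimal sketch of the Common Ground idea follows, assuming a store that accumulates each turn and a placeholder overlap-based relevance test in place of the paper's learned update and selection steps.

    class CommonGround:
        def __init__(self):
            self.facts = []

        def update(self, turn_text):
            # the paper learns what to store; here every turn is kept verbatim
            self.facts.append(turn_text)

        def select(self, question, k=2):
            # placeholder relevance: lexical overlap with the current question
            q = set(question.lower().split())
            ranked = sorted(self.facts,
                            key=lambda f: -len(q & set(f.lower().split())))
            return ranked[:k]

    cg = CommonGround()
    cg.update("We are discussing the Apollo 11 mission.")
    cg.update("Neil Armstrong was the mission commander.")
    print(cg.select("who was the commander", k=1))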

2016

pdf bib
LiMoSINe Pipeline: Multilingual UIMA-based NLP Platform
Olga Uryupina | Barbara Plank | Gianni Barlacchi | Francisco J. Valverde Albacete | Manos Tsagkias | Antonio Uva | Alessandro Moschitti
Proceedings of ACL-2016 System Demonstrations

2015

pdf bib
Distributional Neural Networks for Automatic Resolution of Crossword Puzzles
Aliaksei Severyn | Massimo Nicosia | Gianni Barlacchi | Alessandro Moschitti
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

pdf bib
SACRY: Syntax-based Automatic Crossword puzzle Resolution sYstem
Alessandro Moschitti | Massimo Nicosia | Gianni Barlacchi
Proceedings of ACL-IJCNLP 2015 System Demonstrations

2014

pdf bib
Learning to Rank Answer Candidates for Automatic Resolution of Crossword Puzzles
Gianni Barlacchi | Massimo Nicosia | Alessandro Moschitti
Proceedings of the Eighteenth Conference on Computational Natural Language Learning