Evaluating Robustness Of A QA System Through A Corpus Of Real-Life Questions

Laurianne Sitbon, Patrice Bellot, Philippe Blache


Abstract
This paper presents a step-by-step evaluation of the question answering system SQuaLIA. The system follows the same sequential process as most statistical question answering systems, involving four main steps from question analysis to answer extraction. The evaluation is based on a corpus built from 20 questions drawn from the question set of an evaluation campaign, all of which SQuaLIA originally answered correctly. Each of the 20 questions was typed by 17 participants: native speakers, non-native speakers, and dyslexic participants, who were given the target of each question orally. Each of the four analysis steps of the system introduces a loss of accuracy, leaving an average of 60% correct answers at the end of the process. The main cause of this loss appears to be the spelling mistakes users make on nouns.
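As a rough illustration of the four-step sequential pipeline the abstract refers to, the Python sketch below walks from question analysis to answer extraction. The abstract only names the first and last steps; the two intermediate steps are assumed here to be document retrieval and passage selection, a common decomposition in statistical QA. All function names, heuristics, and data structures are illustrative assumptions and are not taken from SQuaLIA or the paper.

# Minimal sketch of a sequential QA pipeline of the kind the abstract describes.
# Step names, signatures, and scoring heuristics are illustrative placeholders,
# not SQuaLIA's actual implementation.

from dataclasses import dataclass

@dataclass
class AnalyzedQuestion:
    keywords: list          # content words extracted from the question
    expected_type: str      # e.g. "PERSON", "DATE", "OTHER"

def analyze_question(question: str) -> AnalyzedQuestion:
    """Step 1: question analysis (keyword extraction, expected answer type)."""
    words = [w.strip("?.,").lower() for w in question.split()]
    keywords = [w for w in words if len(w) > 3]          # crude stop-word filter
    expected_type = "PERSON" if question.lower().startswith("who") else "OTHER"
    return AnalyzedQuestion(keywords, expected_type)

def retrieve_documents(q: AnalyzedQuestion, corpus: dict) -> list:
    """Step 2: document retrieval by keyword overlap (stand-in for a real IR engine)."""
    scored = [(sum(k in text.lower() for k in q.keywords), doc_id)
              for doc_id, text in corpus.items()]
    return [doc_id for score, doc_id in sorted(scored, reverse=True) if score > 0]

def select_passages(doc_ids: list, corpus: dict, q: AnalyzedQuestion) -> list:
    """Step 3: passage (sentence) selection within the retrieved documents."""
    passages = []
    for doc_id in doc_ids:
        for sentence in corpus[doc_id].split("."):
            if any(k in sentence.lower() for k in q.keywords):
                passages.append(sentence.strip())
    return passages

def extract_answer(passages: list, q: AnalyzedQuestion) -> str:
    """Step 4: answer extraction; here simply the best-matching passage."""
    if not passages:
        return "NO ANSWER"
    return max(passages, key=lambda p: sum(k in p.lower() for k in q.keywords))

if __name__ == "__main__":
    corpus = {"d1": "Jacques Chirac was elected president of France in 1995."}
    q = analyze_question("Who was elected president of France in 1995?")
    docs = retrieve_documents(q, corpus)
    passages = select_passages(docs, corpus, q)
    print(extract_answer(passages, q))

A misspelled keyword in the typed question (the error type the abstract identifies as most damaging) would already fail at the retrieval step of such a pipeline, which is consistent with the accuracy loss accumulating across steps.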
Anthology ID:
L08-1394
Volume:
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
Month:
May
Year:
2008
Address:
Marrakech, Morocco
Editors:
Nicoletta Calzolari, Khalid Choukri, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis, Daniel Tapias
Venue:
LREC
Publisher:
European Language Resources Association (ELRA)
URL:
http://www.lrec-conf.org/proceedings/lrec2008/pdf/244_paper.pdf
Cite (ACL):
Laurianne Sitbon, Patrice Bellot, and Philippe Blache. 2008. Evaluating Robustness Of A QA System Through A Corpus Of Real-Life Questions. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech, Morocco. European Language Resources Association (ELRA).
Cite (Informal):
Evaluating Robustness Of A QA System Through A Corpus Of Real-Life Questions (Sitbon et al., LREC 2008)
PDF:
http://www.lrec-conf.org/proceedings/lrec2008/pdf/244_paper.pdf