Controlling the Specificity of Clarification Question Generation

Yang Trista Cao, Sudha Rao, Hal Daumé III


Abstract
Unlike comprehension-style questions, clarification questions seek information that is missing from a given context. However, without guidance, neural question generation models, like dialog generation models, tend to produce generic and bland questions that cannot elicit useful information. We argue that controlling the level of specificity of the generated questions has useful applications and propose a neural clarification question generation model for this purpose. We first train a classifier that annotates a clarification question with its level of specificity (generic or specific) relative to the given context. Our results on the Amazon questions dataset demonstrate that training a clarification question generation model on specificity-annotated data can generate questions with varied levels of specificity to the given context.
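The abstract implies a two-stage pipeline: a specificity classifier labels each clarification question as generic or specific, and that label is prepended to the context as a control token so the generation model can condition on it. The sketch below illustrates only the data-annotation step with a toy stand-in heuristic; the heuristic, token names, and example data are assumptions for illustration, not the paper's actual classifier.

```python
# Toy stand-in classifier: NOT the paper's trained classifier.
# It calls a question "specific" if the question shares a content
# word with the product context, else "generic".
def classify_specificity(question, context):
    context_words = {w.lower().strip("?.,") for w in context.split() if len(w) > 3}
    question_words = {w.lower().strip("?.,") for w in question.split()}
    return "specific" if context_words & question_words else "generic"

def annotate_for_training(pairs):
    """Prepend a <generic>/<specific> control token to each context.
    At inference time, the desired token is supplied to steer the
    generator toward that level of specificity."""
    examples = []
    for context, question in pairs:
        label = classify_specificity(question, context)
        examples.append((f"<{label}> {context}", question))
    return examples

# Hypothetical Amazon-style (product context, clarification question) pairs.
pairs = [
    ("Logitech wireless mouse with USB receiver",
     "Does the receiver work with Windows 10?"),
    ("Logitech wireless mouse with USB receiver",
     "What is the warranty?"),
]
for source, target in annotate_for_training(pairs):
    print(source, "->", target)
```

A seq2seq model trained on such annotated pairs learns to associate the control token with question style, so swapping the token at test time varies the specificity of the output.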
Anthology ID:
W19-3619
Volume:
Proceedings of the 2019 Workshop on Widening NLP
Month:
August
Year:
2019
Address:
Florence, Italy
Editors:
Amittai Axelrod, Diyi Yang, Rossana Cunha, Samira Shaikh, Zeerak Waseem
Venue:
WiNLP
Publisher:
Association for Computational Linguistics
Pages:
53–56
URL:
https://aclanthology.org/W19-3619
Cite (ACL):
Yang Trista Cao, Sudha Rao, and Hal Daumé III. 2019. Controlling the Specificity of Clarification Question Generation. In Proceedings of the 2019 Workshop on Widening NLP, pages 53–56, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Controlling the Specificity of Clarification Question Generation (Cao et al., WiNLP 2019)