BeamSeg: A Joint Model for Multi-Document Segmentation and Topic Identification

Pedro Mota, Maxine Eskenazi, Luísa Coheur


Abstract
We propose BeamSeg, a joint model for segmentation and topic identification of documents from the same domain. The model assumes that lexical cohesion can be observed across documents, meaning that segments describing the same topic use a similar lexical distribution over the vocabulary. The model implements lexical cohesion in an unsupervised Bayesian setting by drawing segments with the same topic from the same language model. Contrary to previous approaches, we assume that language models are not independent, since vocabulary changes in consecutive segments are expected to be smooth rather than abrupt. We achieve this by using a dynamic Dirichlet prior that takes into account data contributions from other topics. BeamSeg also models segment length properties of documents based on modality (textbooks, slides, etc.). The evaluation is carried out on three datasets. On two of them, improvements of up to 4.8% and 7.3% are obtained in the segmentation and topic identification tasks, respectively, indicating that both tasks should be jointly modeled.
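The shared language model idea in the abstract can be sketched with a collapsed Dirichlet-multinomial score: counts from segments already assigned to a topic act as pseudo-counts in the prior when scoring a new candidate segment, so lexically similar segments score higher under that topic. The function and toy counts below are an illustrative sketch of this mechanism, not the paper's actual implementation.

```python
from math import lgamma

def dirichlet_multinomial_ll(counts, prior):
    """Log marginal likelihood of word counts under a Dirichlet prior
    (collapsed multinomial), used here to score a candidate segment."""
    n, a = sum(counts), sum(prior)
    ll = lgamma(a) - lgamma(a + n)
    for c, p in zip(counts, prior):
        ll += lgamma(p + c) - lgamma(p)
    return ll

# Lexical cohesion: segments assigned the same topic share a language
# model, so counts from an already-assigned segment become pseudo-counts,
# i.e. a data-informed (dynamic) Dirichlet prior for new segments.
base_prior = [0.1] * 4                # symmetric hyperparameter (toy value)
other_segment = [5, 3, 0, 0]          # word counts from a same-topic segment
dynamic_prior = [b + c for b, c in zip(base_prior, other_segment)]

new_segment = [4, 2, 0, 0]            # lexically similar candidate segment
score_shared = dirichlet_multinomial_ll(new_segment, dynamic_prior)
score_fresh = dirichlet_multinomial_ll(new_segment, base_prior)
print(score_shared > score_fresh)     # cohesion favors the shared topic
```

Assigning the candidate to the topic whose (data-informed) prior yields the highest marginal likelihood is the basic move a sampler or beam search over segmentations would repeat at each boundary decision.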
Anthology ID:
K19-1054
Volume:
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Mohit Bansal, Aline Villavicencio
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
582–592
URL:
https://aclanthology.org/K19-1054
DOI:
10.18653/v1/K19-1054
Cite (ACL):
Pedro Mota, Maxine Eskenazi, and Luísa Coheur. 2019. BeamSeg: A Joint Model for Multi-Document Segmentation and Topic Identification. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 582–592, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
BeamSeg: A Joint Model for Multi-Document Segmentation and Topic Identification (Mota et al., CoNLL 2019)
PDF:
https://aclanthology.org/K19-1054.pdf