DeepPaperComposer: A Simple Solution for Training Data Preparation for Parsing Research Papers

Meng Ling, Jian Chen


Abstract
We present DeepPaperComposer, a simple solution for preparing highly accurate (100%) training data without manual labeling to extract content from scholarly articles using convolutional neural networks (CNNs). We used our approach to generate data and trained CNNs to extract eight categories of both textual (titles, abstracts, headers, figure and table captions, and other text) and non-textual (figures and tables) content from 30 years of IEEE VIS conference papers, of which a third were scanned bitmap PDFs. We curated this dataset and named it VISpaper-3K. We then showed initial benchmark performance of models trained on VISpaper-3K, evaluated on both VISpaper-3K and CS-150, using YOLOv3 and Faster R-CNN. We open-source DeepPaperComposer, our training data generation pipeline, and release the resulting annotation data, VISpaper-3K, to promote reproducible research.
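
To make the extraction step concrete, the sketch below shows how one of the two detectors benchmarked in the paper (Faster R-CNN, here via torchvision) could be configured for the eight content categories and applied to a rendered PDF page. This is a minimal illustration, not the authors' released code: the category names, label order, rendering DPI, and the use of PyMuPDF for rasterization are assumptions.

# Hypothetical sketch (not the authors' released code): a Faster R-CNN
# detector configured for the eight VISpaper-3K content categories and
# applied to one rendered PDF page.
import fitz                                   # PyMuPDF: renders PDF pages to images
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# Eight categories listed in the abstract; the label order here is arbitrary.
CATEGORIES = ["title", "abstract", "header", "figure_caption",
              "table_caption", "other_text", "figure", "table"]

def build_detector():
    # One extra class for the background, as torchvision's detectors expect.
    # In practice the weights would come from training on VISpaper-3K annotations.
    return fasterrcnn_resnet50_fpn(num_classes=len(CATEGORIES) + 1)

def detect_page(model, pdf_path, page_index=0, score_thresh=0.5):
    # Rasterize one page, run the detector, and keep confident boxes.
    page = fitz.open(pdf_path)[page_index]
    pix = page.get_pixmap(dpi=150)
    image = Image.frombytes("RGB", (pix.width, pix.height), pix.samples)
    model.eval()
    with torch.no_grad():
        pred = model([to_tensor(image)])[0]
    results = []
    for label, box, score in zip(pred["labels"], pred["boxes"], pred["scores"]):
        if score >= score_thresh:
            results.append((CATEGORIES[int(label) - 1], box.tolist(), float(score)))
    return results

YOLOv3, the other detector reported in the paper, would consume the same rendered page images and bounding-box annotations, only in its own annotation format.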
Anthology ID: 2020.sdp-1.10
Volume: Proceedings of the First Workshop on Scholarly Document Processing
Month: November
Year: 2020
Address: Online
Editors: Muthu Kumar Chandrasekaran, Anita de Waard, Guy Feigenblat, Dayne Freitag, Tirthankar Ghosal, Eduard Hovy, Petr Knoth, David Konopnicki, Philipp Mayr, Robert M. Patton, Michal Shmueli-Scheuer
Venue: sdp
Publisher: Association for Computational Linguistics
Pages: 91–96
URL: https://aclanthology.org/2020.sdp-1.10
DOI: 10.18653/v1/2020.sdp-1.10
Cite (ACL): Meng Ling and Jian Chen. 2020. DeepPaperComposer: A Simple Solution for Training Data Preparation for Parsing Research Papers. In Proceedings of the First Workshop on Scholarly Document Processing, pages 91–96, Online. Association for Computational Linguistics.
Cite (Informal): DeepPaperComposer: A Simple Solution for Training Data Preparation for Parsing Research Papers (Ling & Chen, sdp 2020)
PDF: https://aclanthology.org/2020.sdp-1.10.pdf
Video: https://slideslive.com/38940719