BiomedSumm: Shared task on Biomedical Summarization

CALL FOR PARTICIPATION

BiomedSumm: Shared task on Biomedical Summarization
at the Text Analysis Conference (TAC 2014)

November 17-18, 2014

http://www.nist.gov/tac/2014/BiomedSumm/

INTRODUCTION

Since 2001, the US National Institute of Standards and Technology
(NIST) has organized large-scale shared tasks for automatic text
summarization within the Document Understanding Conference (DUC) and
the Summarization track at the Text Analysis Conference (TAC).
However, while DUC and TAC generated a wealth of evaluation resources
for news summarization, far less material is available to support
development of methods of automatic summarization in other domains
where there is also a pressing need for distillation and management of
complex information presented in vast amounts of text.

Today, finding an overview of specific developments in biomedicine
requires painstaking work. The existence of surveys tells us that such
information is desirable, but such surveys require considerable time
and human effort, and cannot keep up with the rate of scientific
publication. For example, papers are added to PubMed alone at the rate
of about 1.5 articles per minute, precluding the possibility of manual
summarization of the scientific literature.

The goal of the TAC 2014 Biomedical Summarization track (BiomedSumm)
is to develop technologies that aid in the summarization of biomedical
literature.

You are invited to participate in BiomedSumm at TAC 2014. NIST will
provide test data for the shared task, and participants will run their
NLP systems on the data and return their results to NIST for
evaluation. TAC culminates in a November workshop at NIST in
Gaithersburg, Maryland, USA.

All results submitted to NIST are archived on the TAC web site, and
all evaluations of submitted results are included in the workshop
proceedings. Dissemination of TAC work and results other than in the
workshop proceedings is welcomed, but the conditions of participation
specifically preclude any advertising claims based on TAC results.

SHARED TASK

There are currently two ways in which scientific papers are usually
summarized: first, by the abstract that the author provides; second,
when a paper is cited, by a brief summary of its pertinent points in
the citing text. However, both methods fall short of the reader's
needs: the abstract cannot convey the lasting influence of the paper,
and a citation does not show how the original author expressed the
claim.

The set of citation sentences (i.e., "citances") that reference a
specific paper can be seen as a community-created summary of that
paper (see, e.g., [1,2]). The set of citances is taken to summarize
the key points of the referenced paper, and so reflects the paper's
importance within an academic community. Among the benefits of this
form of summarization is that a citance offers a new type of context
that was not available when the referenced paper was written: in
citing, authors often combine, compare, or comment on papers, so the
collection of citations to a reference paper adds an interpretative
layer to the cited text.

The drawback, however, is that although a collection of citances
offers a view of the cited paper, it does not provide the context, in
terms of data or methods, of the cited finding; if the citation is of
a method, the data and results may not be cited. More seriously, a
citing author can attribute findings or conclusions to the cited paper
that are not present, or not intended in that form (e.g., the finding
may hold only under specific experimental conditions which are not
cited). To provide more context, and to establish trust in the
citance, the reader would need to see, next to the citance, the exact
span(s) of text (or tables or figures) being cited, and be able to
link into the cited text at exactly that point.

To give the abstract-as-summary the benefit of community insight, and
to give the citances-as-summary the benefit of context, we explore a
new form of structured summary: a faceted summary of the traditional
self-summary (the abstract) and the community summary (the collection
of citances). As a third component, we propose to group the citances
by the facets of the text that they refer to.

A pilot study indicated that most citations clearly refer to one or
more specific aspects of the cited paper. For biomedicine, this is
usually the goal of the paper, the method, the results or data
obtained, or the conclusions of the work. This insight can help create
more coherent citation-based summaries: by identifying, first, the
cited text span and, second, the facet of the paper (Goal, Method,
Result/Data, or Conclusion), we can create a faceted summary of the
paper by clustering all cited/citing sentences by facet.
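
Purely as an illustration of this clustering step (not part of the
official task definition), the Python sketch below groups citances
that have already been assigned a facet label; the field names
("facet", "text") are assumptions made for the example.

    # Sketch: build a faceted community summary from citances that
    # already carry a facet label. Field names are hypothetical.
    from collections import defaultdict

    FACETS = ("Goal", "Method", "Result/Data", "Conclusion")

    def faceted_summary(citances):
        # citances: list of dicts such as
        #   {"facet": "Method", "text": "citing sentence ..."}
        summary = defaultdict(list)
        for c in citances:
            if c["facet"] in FACETS:
                summary[c["facet"]].append(c["text"])
        return dict(summary)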

Use Case: This form of scientific summarization could be a component
of a user interface in which hovering over or clicking on a citation
displays either a citance-focused faceted summary of the referenced
paper, or a full summary of the referenced paper that takes into
account the citances in all papers citing it. Finally, this form of
scientific summarization would allow a user to read the original
reference paper with links to the subsequent literature that cites
specific ideas in the reference paper.

The automatic summarization task is defined as follows:

Given: A set of Citing Papers (CPs) that all contain citations to a
Reference Paper (RP). In each CP, the text spans (i.e., citances) that
pertain to a particular citation to the RP have been identified.

Task 1a: For each citance, identify the spans of text (cited text
spans) in the RP that most accurately reflect the citance. These are
of the granularity of a sentence fragment, a full sentence, or several
consecutive sentences (no more than 5).

Task 1b: For each cited text span, identify what facet of the paper
it belongs to, from a predefined set of facets.

Task 2: Finally, generate a structured summary of the RP and all of
the community discussion of the paper represented in the citances. The
length of the summary should not exceed 250 words. Task 2 is
tentative.
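
The official input and submission formats will be specified in the
track guidelines; purely as an illustration, one possible in-memory
representation of a system's Task 1a/1b output for a single citance is
sketched below in Python (all field names are assumptions, not the
official format).

    # Illustrative only: one way to represent a single citance with
    # the system's Task 1a (cited text spans) and Task 1b (facet) output.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class CitanceResult:
        citing_paper_id: str               # identifier of the CP
        reference_paper_id: str            # identifier of the RP
        citance_offsets: Tuple[int, int]   # character span of the citance in the CP
        # Task 1a: character spans in the RP (at most a few consecutive sentences)
        cited_text_spans: List[Tuple[int, int]] = field(default_factory=list)
        # Task 1b: one of "Goal", "Method", "Result/Data", "Conclusion"
        facet: str = ""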

Evaluation: Task 1 will be scored by the overlap of text spans in the
system output versus the gold standard. Task 2 will be scored using
the ROUGE family of metrics [3]. Again, Task 2 is tentative.
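
The exact scoring procedure will be defined by the organizers; as a
rough illustration of what span-overlap scoring can look like, an F1
measure over character offsets could be computed along these lines
(an assumption for illustration only, not the official metric):

    # Rough illustration of overlap scoring for Task 1a (not the
    # official metric): compare system and gold spans as sets of
    # character offsets in the RP.
    def span_overlap_f1(system_spans, gold_spans):
        # Each span is a (start, end) character-offset pair; end exclusive.
        sys_chars = {i for s, e in system_spans for i in range(s, e)}
        gold_chars = {i for s, e in gold_spans for i in range(s, e)}
        overlap = len(sys_chars & gold_chars)
        if overlap == 0:
            return 0.0
        precision = overlap / len(sys_chars)
        recall = overlap / len(gold_chars)
        return 2 * precision * recall / (precision + recall)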

Data for the biomedical summarization task will come from the domain
of cell biology. The data will initially be distributed through the
TAC shared task and will be archived on SourceForge.net at
tacsummarizationsharedtask.sourceforge.net.

This corpus is expected to be of interest to a broad community,
including those working in biomedical NLP, text summarization,
discourse structure in scholarly text, paraphrase, textual entailment,
and text simplification.

REGISTRATION

Organizations wishing to participate in the BiomedSumm track at TAC
2014 are invited to register online by June 30, 2014. Participants are
advised to register and submit all required agreement forms as soon as
possible in order to receive timely access to evaluation resources,
including training data. Registration for the track does not commit
you to participating, but it helps the organizers plan. Late
registration will be permitted only if resources allow. Any questions
about conference participation may be sent to the TAC project manager:
tac-manager [at] nist.gov.

Track registration: http://www.nist.gov/tac/2014/BiomedSumm/registration.html

WORKSHOP

The TAC 2014 workshop will be held November 17-18, 2014, in
Gaithersburg, Maryland, USA. The workshop is a forum both for
presentation of results (including failure analyses and system
comparisons), and for more lengthy system presentations describing
techniques used, experiments run on the data, and other issues of
interest to NLP researchers. TAC track participants who wish to give a
presentation during the workshop will submit a short abstract
describing the experiments they performed. As there is a limited
amount of time for oral presentations, the abstracts will be used to
determine which participants are asked to speak and which will present
in a poster session.

IMPORTANT DATES

Early May 2014: Initial track guidelines posted
End of May 2014: Distribution of first release of training data
June 30, 2014: Deadline for registration for track participation
July 31, 2014: Final release of training data
August 11, 2014: Blind test data released
August 22, 2014: Results on blind test data due
Mid-September 2014: Release of individual evaluated results to participants
October 7, 2014: Short system descriptions due
October 7, 2014: Workshop presentation proposals due
Mid-October 2014: Notification of acceptance of presentation proposals
November 1, 2014: System reports for workshop notebook due
November 17-18, 2014: TAC 2014 workshop in Gaithersburg, Maryland, USA
February 15, 2015: System reports for final proceedings due

REFERENCES

[1] Preslav I. Nakov, Ariel S. Schwartz, and Marti A. Hearst (2004)
Citances: Citation sentences for semantic analysis of bioscience text.
SIGIR 2004.

[2] Vahed Qazvinian and Dragomir R. Radev (2010) Identifying
Non-explicit Citing Sentences for Citation-based Summarization. ACL
2010.

[3] Chin-Yew Lin (2004) ROUGE: A package for automatic evaluation of
summaries. Proceedings of "Text Summarization Branches Out," pp. 74-81.

ORGANIZING COMMITTEE

Kevin Bretonnel Cohen, University of Colorado School of Medicine, USA
Hoa Dang, National Institute of Standards and Technology, USA
Anita de Waard, Elsevier Labs, USA
Prabha Yadav, University of Colorado School of Medicine, USA
Lucy Vanderwende, Microsoft Research, USA